Critique: $6.2 trillion global IT failure stats

Summary: A recent white paper put the worldwide dollar cost of IT failure at a staggering $6.2 trillion per year, or roughly $500 billion each month. Here's a critique of that research.

Researchers often attempt to quantify the number of failed IT projects, usually reporting statistics that discuss failures as a percentage of the overall number of IT projects. These failure stats are primarily useful to the extent they illustrate that IT failure is a common and serious problem.

In a recent white paper, Roger Sessions defined a model that quantifies the dollar cost of IT failure worldwide. Roger concludes that global IT failure costs the world economy a staggering $6.2 trillion per year, or $500 billion each month. Given these large numbers, it's no surprise the white paper has received much attention on this blog and elsewhere.

Reactions to the white paper are mixed, with both supporters and detractors lining up with their opinions. The debate even made it onto the popular techno-geek news site Slashdot, demonstrating that Roger's conclusions hit a nerve.

In contrast to many of the opinions, IT failure consultant Bruce Webster wrote a serious "analytical critique" of the white paper and its calculations. In that piece, Bruce states:

Unfortunately, Sessions is fundamentally wrong in his numerical analysis, and his numbers are off by far more than “ten or twenty percent”. For the Federal Government alone, they are off by almost a full order of magnitude (10x)....

[M]y conclusion here is that his estimate of $500 billion/month in lost direct and indirect costs due to IT systems failure just does not hold up, in my opinion.

You can read the detailed arguments, so I won't repeat them here. However, the critique generally states that Roger's approach:

  • Incorrectly interprets government-supplied data regarding IT failure rates and associated costs
  • "Ignores or confuses" failure rate data regarding new projects relative to existing systems
  • Wrongly extrapolates limited US data to the remainder of the world
  • Makes unjustified assumptions regarding "direct and indirect costs," which have a substantial impact on the conclusions

THE PROJECT FAILURES ANALYSIS

By attempting to quantify the dollar cost of IT failure, the white paper adds a new and useful dimension to the usual failure statistics. The associated critique, which catalogs possible misinterpretations of incomplete data, will help anyone interested in refining the approach described in the white paper.

I do fault the critique in one important area: it does not offer an alternative to Roger's $6.2 trillion number. Perhaps the real number is "lots and lots smaller," but we need greater accuracy. Granted, the source data is not complete, but re-working Roger's original calculations based on different assumptions would be a worthwhile project.
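
To make the point concrete, here is a minimal Fermi-style sketch, in Python, of what such a re-working might look like. The inputs below (worldwide IT spend, share of spend on at-risk projects, failure rate, indirect-cost multiplier) are placeholder assumptions of my own, not Roger's actual figures, and the function is purely illustrative; the point is how much the headline number swings when the assumptions change.

# Illustrative Fermi-style estimate of worldwide IT failure cost.
# All inputs are assumptions for demonstration, not validated data.

def failure_cost_estimate(it_spend_billions, share_at_risk, failure_rate, indirect_multiplier):
    """Return (direct, total) annual losses in $ billions."""
    direct_loss = it_spend_billions * share_at_risk * failure_rate
    total_loss = direct_loss * (1 + indirect_multiplier)
    return direct_loss, total_loss

# Two hypothetical sets of assumptions: one aggressive, one conservative.
scenarios = {
    "aggressive": (3_000, 0.66, 0.5, 5.0),
    "conservative": (3_000, 0.30, 0.2, 1.0),
}

for label, args in scenarios.items():
    direct, total = failure_cost_estimate(*args)
    print(f"{label}: direct loss ~${direct:,.0f}B/year, with indirect ~${total:,.0f}B/year")

Under the aggressive assumptions the total lands in the same neighborhood as Roger's $6.2 trillion; under the conservative ones it drops to a few hundred billion per year. That sensitivity is exactly why the assumptions, not the arithmetic, deserve the scrutiny.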

My take. We still do not have accurate numbers on the annual world-wide cost of IT failure. Nonetheless, even an imprecise estimate based on rough and incomplete data is better than nothing at all.

Still, I remain most interested in a different model for quantifying failure: understanding the real-world costs of IT failure inside individual organizations.

Information derived from that model would help companies better link IT investment choices to outcomes, utility, and waste (or failure). Organizations could use that information to help guide better IT purchase and deployment decisions.
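
To illustrate what such an organization-level model might look like in its simplest form, here is a hypothetical sketch. The project names, dollar figures, and the "delivered value" measure are invented placeholders, not an established methodology; a real model would need far richer data.

# Hypothetical per-organization tally of IT spend versus realized value.
# All names and figures are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    budget: float           # total spend in dollars
    delivered_value: float  # estimated business value actually realized

portfolio = [
    Project("CRM upgrade", 2_000_000, 3_500_000),
    Project("ERP rollout", 5_000_000, 1_000_000),   # largely written off
    Project("Data warehouse", 1_500_000, 1_500_000),
]

waste = sum(max(p.budget - p.delivered_value, 0) for p in portfolio)
spend = sum(p.budget for p in portfolio)
print(f"Portfolio spend: ${spend:,.0f}; estimated waste: ${waste:,.0f} ({waste / spend:.0%} of spend)")

Even a crude tally like this ties a waste figure to specific investment decisions inside one organization, rather than to a global abstraction.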

[Madcap researcher picture from iStockPhoto.]


Talkback

  • I doubt it

    I don't think the failure rate is so high.
    In other fields these 'failures' are called R&D, market research or something else, and it's hard to quantify what 'failure' means.
    Linux Geek
  • I honestly don't think the numbers are valid.

    The $6.2 trillion cost sounds like an exaggeration.

    Regardless .... not all failures are real failures.

    #1- Many so-called failures are due to budget cuts and not because the project was an actual failure. An incomplete project is not a failure ... just an unfinished product. And btw, I'm not talking about projects that are over budget, but projects canceled to save money.

    #2- Sometimes failures are more valuable than successes. The long-term value of lessons learned can sometimes outweigh the value of a flawless success.

    #3- Many failures are hidden Phoenixes. A lot of good work can rise from the ashes of a failed project. A project is never a single product; it is always a collection of ideas and products. Many of the supporting products can become the base for another product, or an item that increases productivity in other projects. An idea can become the next big thing and grow to be a million-dollar winner.
    wackoae
  • RE: Critique: $6.2 trillion global IT failure stats

    I have (for my sins) worked for the US and UK Governments at different times. Believe me, the ability to waste taxpayers' hard-earned money is awesome to behold!
    On the plus side, the US Government had a better handle on it! Both projects worked, if a bit over budget.
    The UK Government's ideas are outrageous! Ill-thought-out, under-resourced, and sometimes just plain wrong!
    For some recent failures in UK Government computer projects alone, see
    http://ukliberty.wordpress.com/government-it-gone-wrong/
    http://www.computerweekly.com/blogs/tony_collins/
    Agnostic_OS
  • Even if...

    $6.2 trillion were 50% off and the true number were $3.1 trillion, that's still a TON of money.

    My goal for 2010 - reduce that number, even if in my own little way.

    Garry
    gpolmateer@...
  • 66% is the worst number

    The first number to worry about is the 66% failure rate. It sounds like an IT failure when it is really a failure of management to spend time listening to the analysts explain why the latest 'good idea' is an unworkable or too-expensive bad idea.

    The 66% failure rate is a reflection of how incompetent management in general is at making big decisions about technology. These are not technological failures.
    minstrelmike@...
  • RE: Critique: $6.2 trillion global IT failure stats

    "I do fault the critique in one important area: it does not offer an alternative to Roger?s $6.2 trillion number."

    I disagree with you, Michael; I don't think another inaccurate, unsupportable number will help. Most everyone agrees on the problem; I don't see many people arguing that there is no problem.

    That should be enough to motivate people to find a solution; we should not have to make up numbers.
    railmeat
  • Response from Roger Sessions

    Michael, thanks again for bringing this conversation to the forefront! Let me respond to Bruce Webster's criticism of my White Paper (available at http://bit.ly/3O3GMp). From my reading of his blog, he raises three main criticisms:

    1. My claim that 66% of federal IT dollars are invested in at-risk projects is too high and my claim that the number is increasing by 15% per year is wrong.
    2. Most IT budget goes into maintaining existing systems, not building new systems.
    3. The real loss is $500B, not $6T.

    I'll briefly respond to each of these.

    On point 1, the number of major federal IT projects has remained relatively constant at about 800 for the last three years (2007-2009). According to the 2009 U.S. budget, 30% of these were considered "troubled" in 2007. In 2008, this rose to 43%; in 2009, to 66%. Even Webster acknowledges these figures.

    Webster is correct that these are numbers of projects, not budget numbers, but they are the best possible guess, and they all reflect the largest, most complex, most expensive projects on the federal list. Therefore it is extremely likely that these figures track closely with budget.

    Furthermore, as I point out in my editorial in Perspectives of the International Association of Software Architects (Jan, 2009, available at www.objectwatch.com/white-papers), this number is almost certainly an underestimate because of the extensive (and highly questionable) practice of re-baselining.

    On point 2, Webster criticizes my numbers because he says that most budget goes into existing systems. This is true, but this, too, represents a failure. The reason that so much money needs to be spent on existing systems is because they were so poorly designed in the first place! Had they been designed in such a way that complexity had been properly managed, the 90% that Webster rightly claims is spent on existing systems could be reduced dramatically. This is not, as Webster implies, part of the direct cost of success; this is one of the many indirect costs of failure!

    On Webster's third point: he claims that the real cost is $500B, not $6.2T as I claim. However, unlike my paper, Webster gives absolutely no basis for where he came up with this number. As this shows, it is easy to criticize somebody else who has presented a working model for the cost of complexity. It is a lot more difficult to come up with one yourself.

    Finally, let me say that Webster has apparently missed the main point of the white paper, which is not the exact cost of IT failures, but a practical approach to reducing the complexity of these IT systems that is causing these failures! This topic, which takes up 15 of the 20 pages of text, is totally ignored in his analysis.

    For the CIO, the bottom line is simple. If you don't believe that the complexity of your IT systems is choking you, then my White Paper has absolutely nothing to offer you. If, on the other hand, you believe that the complexity of your IT systems IS a major cost factor for your organization, then you owe it to your stakeholders to look more closely at the approaches I claim can reduce that complexity.
    roger@...
  • It is very misleading to say that Roger Sessions

    "... defined a model that quantifies the dollar cost of IT failure worldwide."

    Roger Sessions simply gave an order-of-magnitude estimate using the same approach as Nobel Prize-winning physicist Enrico Fermi:

    http://www.education.com/activity/article/Fermi_middle/

    http://physics.suite101.com/article.cfm/fermi_problems_physics_estimation

    Roger Sessions explicitly says: "The numbers are estimates, of course. The precise numbers are not the point. The sheer magnitude of the numbers is what is important."

    This estimation approach is a respected and common practice in science and engineering. It is very useful for getting a grip on a problem for which detailed data are not available. One of the basic principles of an order-of-magnitude estimate is that errors (high or low) in any one of the terms in the estimate tend to be canceled out by errors in other terms.

    I agree with Michael Krigsman: "I do fault the critique in one important area: it does not offer an alternative to Roger's $6.2 trillion number."

    Saying "I don't think these numbers are right" is pointless. Roger's estimate is not exact. We all know that. Stating the obvious doesn't contribute anything to discuss.

    Saying that "I don't like these numbers" is a feeling. Feelings are great. Share them with a best friend or significant other. Feelings don't add anything for discussion in engineering.

    Saying that one number or another is "bad" is a start. So take the next steps and give a different number, give a reasoned basis for the number, and give a revised estimate using the number.

    Or submit a different estimate using an entirely different approach. Describe the approach, justify assumptions and estimates, and turn the crank for a result.

    Either of the above two approaches lets us all join in a calm, rational, and quantitative dialogue.
    Cardhu
  • A Third Option...

    ... re-estimate the failure cost using Roger Sessions' sources, but with different assumptions. For example, if Bruce feels 66% is too high, he should propose another percentage and state the reasons for the alternative, and so on.
    elizab
  • A failed critique

    The major points have already been made above, but suffice it to say that Webster's critique as presented strikes me as far weaker in its approach and conclusions than Sessions' original paper, which, again, simply took an estimating "order of magnitude" tack on the way to discussing something much more germane: an approach for reducing complexity and hence reducing waste. The critique quarrels with the pennies and misses the pounds, in other words.

    I don't agree with Roger Sessions in all respects, by any means, and have posted my own elaboration of my thoughts on this on my "CTO/CIO Perspectives" blog, in a post titled "Complexity isn't simple: multiple causes of IT failure", at
    http://www.peterkretzman.com/2009/11/16/complexity-isn?t-
    simple-multiple-causes-of-it-failure/

    In a nutshell, I posit in my blog that there's far more to be said and done about the myriad causes of IT complexity than the one area on which Roger focuses. That said, Roger has some great analysis and a viable approach to dealing with the complexity he identifies, and his paper is well worth reading for that alone.

    Bottom line: as Roger states above, "Webster has apparently missed the main point of the white paper, which is not the exact cost of IT failures, but a practical approach to reducing the complexity of these IT systems that is causing these failures!"

    Peter Kretzman
    • Corrected link in the above

      The link in my above comment got chopped by a line break: here's a shortened pointer to that article on my blog: http://bit.ly/6MqDHN
      Peter Kretzman
      • And a very worthwhile article it is

        Thanks for sharing it.

        A most fertile topic for discussion.
        Cardhu
    • I agree completely

      Especially with the part about complexity not being the sole cause for engineering (including IT) project failure.
      Cardhu