
# Annual cost of IT failure: \$6.2 trillion

Summary: The total annual cost of worldwide IT failures is \$6.2 trillion, according to calculations performed by Roger Sessions, über-expert enterprise architect and CTO of ObjectWatch.

TOPICS: CXO

Roger presents his analysis in a blog post:

According to the World Information Technology and Services Alliance (WITSA), countries spend, on average, 6.4% of Gross Domestic Product (GDP) on Information and Communications Technology, with 43% of this spent on hardware, software, and services. This means that, on average, 6.4% × 0.43 = 2.75% of GDP is spent on hardware, software, and services. I will lump hardware, software, and services together under the banner of IT.

According to the 2009 U.S. Budget, 66% of all Federal IT dollars are invested in projects that are “at risk”. I assume this number is representative of the rest of the world.

A large number of these will eventually fail. I assume the failure rate of an “at risk” project is between 50% and 80%. For this analysis, I’ll take the average: 65%.

Every project failure incurs both direct costs (the cost of the IT investment itself) and indirect costs (the lost “opportunity” costs). I assume that the ratio of indirect to direct costs is between 5:1 and 10:1. For this analysis, I’ll take the average: 7.5:1.

To find the predicted annual cost of IT failure, we then multiply these numbers together: 0.0275 (fraction of GDP spent on IT) × 0.66 (fraction of IT at risk) × 0.65 (failure rate of at-risk projects) × 7.5 (indirect-cost multiplier) = 0.089. To predict the cost of IT failure for any country, multiply its GDP by 0.089.

Based on this, the following table gives the annual cost of IT failure for various regions of the world, in billions of USD:

```
REGION        GDP (B USD)  Cost of IT Failure (B USD)
World         69,800       6,180
USA           13,840       1,225
New Zealand   44           3.90
UK            2,260        200
Texas         1,250        110
```
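Sessions' chain of assumptions is easy to reproduce. The sketch below simply re-runs his arithmetic against the GDP figures from the table; the variable names are mine, but every number is his:

```python
# Reproduce Roger Sessions' IT-failure-cost multiplier from his stated assumptions.
ICT_SHARE_OF_GDP = 0.064  # average ICT spend as a fraction of GDP (WITSA figure)
IT_SHARE_OF_ICT = 0.43    # hardware/software/services share of ICT spend
AT_RISK = 0.66            # fraction of IT dollars in "at risk" projects (2009 US Budget)
FAILURE_RATE = 0.65       # assumed midpoint of the 50%-80% at-risk failure range
INDIRECT_RATIO = 7.5      # assumed midpoint of the 5:1-10:1 indirect:direct cost range

multiplier = (ICT_SHARE_OF_GDP * IT_SHARE_OF_ICT
              * AT_RISK * FAILURE_RATE * INDIRECT_RATIO)
print(round(multiplier, 3))  # 0.089

gdp_billions = {"World": 69_800, "USA": 13_840, "New Zealand": 44,
                "UK": 2_260, "Texas": 1_250}
for region, gdp in gdp_billions.items():
    print(f"{region}: {gdp * multiplier:,.0f}B USD")  # World: 6,180B USD, etc.
```

One quirk worth noting: the formula multiplies by the 7.5 indirect ratio alone, even though the text says a failure incurs both direct and indirect costs; including the direct cost as well (a factor of 8.5) would push the estimate even higher.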

## The Project Failures Analysis

Quantifying the cost of failure is an exceedingly important step in communicating the scope and breadth of this worldwide problem. Roger Sessions deserves our thanks for doing so.

The calculations are highly dependent on the underlying assumptions. Some of the key variables include:

• Definition of "failure"
• Rates of failure
• Global variation in failure rates across countries

Although Roger's calculations are not precise, they paint a clear, directional picture of the financial impact of IT failures.

[Image from iStockphoto.]

Michael Krigsman is recognized internationally as an analyst, strategy advisor, enterprise advocate, and blogger. Interact with Michael on Twitter at @mkrigsman.

## Talkback

• ### I'm not buying their claims..

It looks like they're declaring all IT activities to be failures, which is patently ridiculous.

The companies I support rarely experience a failure. I do the product evaluations up front and determine if a product is worthy of purchase and deployment.

I use the motto: if it ain't broke, don't fix it (or replace it). Since then, each software vendor has come out with six, seven, eight new versions of its software, but I'm not buying them.

I spec in low-energy-consumption systems built from off-the-shelf components for maximum reliability (no Gateways, Dells, HPs, etc.). I also instruct them to shut the machines down when they leave; less run-time for moving parts == higher reliability.

Most IT maintenance activity is centered around the nightly server backup. After that, it's the occasional keyboard, mouse, CPU fan, disk drive, minor software glitch, etc. I don't consider them to be IT failures.
• ### Definition of failure

Failure is not an all or nothing proposition. I generally consider failed projects to meet one of these three criteria:

1. Late
2. Over-budget
3. Doesn't meet planned objectives

From that standpoint, the numbers in this post are actually conservative and probably understate the problem.
• ### Still not buying it.. Cost estimate way out of line..

\$6.2 trillion is enough money to pay one hundred million (100,000,000) IT workers \$62K per year. I don't think so; the number is WAY out of line.
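That sanity check is straightforward to verify (the \$62K salary is the commenter's own illustrative figure):

```python
# How many $62K/year salaries would $6.2 trillion fund?
total_cost = 6.2e12  # claimed annual cost of IT failure, in USD
salary = 62_000      # commenter's assumed annual IT salary, in USD
print(int(total_cost / salary))  # 100000000 -> one hundred million people
```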

As for your criteria, those don't cut it, since most managers are NOT trained in the sciences and are therefore incompetent to manage most technology-driven projects.

I.e., the vast majority of management wants their projects completed for free, completed before they were proposed, and expects them to manufacture gold coins on demand.

In the real world, technology projects are mostly dictated by the principles of science and the skill levels of those implementing them.
• ### Check the math or assumptions

Opinions that something is "way out of line" are fine, but what is the point of comparison?

Stating a strong opinion without commenting on or criticizing the math or assumptions doesn't add anything.
• ### Indirect costs way out of line.. 7.5x multiplier..

For the most part, indirect costs are just the expenses lost in the project (at most 1x; the author assumes 7.5x), as a majority of IT projects and upgrades replace some other existing system/procedure.

And most IT upgrades have high secondary value even if the initial project was a failure (new PCs, etc.).

I.e., a new \$300 fax machine (operating costs \$30/month versus an expensive \$400/month fax-copier lease) is still functional even though the fax-to/from-email function doesn't work 100% of the time.
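The fax-machine example works out as a quick payback calculation (all figures are the commenter's):

```python
# Payback period for replacing a $400/month fax-copier lease with a $300 machine.
machine_cost = 300       # one-time cost of the new fax machine, USD
lease_per_month = 400    # old fax-copier lease, USD/month
running_per_month = 30   # new machine's operating cost, USD/month

monthly_savings = lease_per_month - running_per_month  # 370 USD/month
print(round(machine_cost / monthly_savings, 2))  # 0.81 -> pays for itself in under a month
```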

I would place the overall cost of IT failure number in the 50 to 300 billion dollar range.

• ### Besides other major errors.. what is the CODB without IT ???

Yep, that's right: what is the Cost of Doing Business WITHOUT IT infrastructure???

No computers, no cell phones, no GPS, nothing more than some TI calculators and landline phones. That figure would be so high that one really can't put a price on it.

Meanwhile, the IT infrastructure we have won't last forever; without replacement it will fail. Then you're back to the first premise: what is the cost of NO information technology?

Lastly, it makes the ludicrous assumption that major federal IT projects (worthy of analysis) == ALL projects, public and private.

The larger the project, the nearer to zero its chance of 100% success. Every large project must make design and implementation compromises along the way. I.e., the spec writers aren't perfect and make errors. Larger projects == more spec errors.
• ### So anything learned while developing these

"failed projects" is not used in a new, successful project?

That is the problem with studies like these.

If 9 out of 10 objectives are met, it is a failure.

Yet those 9 objectives that do work go on to save the company money: if the goal was to spend 1 million dollars to save 10 million, and you end up saving only 8 million, is that still considered a "failure" because the objective was not met?

I will take that kind of failure any day.
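Put in numbers, the hypothetical above shows why a missed target can still be a net win (the \$1M/\$10M/\$8M figures are from the comment itself):

```python
# A project that misses its savings target can still return a large net gain.
spend = 1_000_000            # project cost
target_savings = 10_000_000  # planned objective
actual_savings = 8_000_000   # what was actually achieved

net_gain = actual_savings - spend
print(net_gain)  # 7000000 -> a $7M net gain despite "failing" the objective
print(actual_savings >= target_savings)  # False -> so it counts as a failure
```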
• ### definition of failure

I think there are lots of gray areas when you try to define failure.

Late and over-budget projects that do meet planned objectives shouldn't be considered failed projects, depending on how late and over-budget they actually ended up being.

• ### RE: Annual cost of IT failure: \$6.2 trillion

Failure is inevitable for risk takers. If you are not prepared to take a risk, bury your money in the garden! Most people prefer to invest their money, innovate, and hope they are successful. Failure, however, is inevitable. The success rate therefore must be the amount by which success exceeds failure in a risk-taking environment. I think the world usually manages to achieve a good success-to-failure rate in technology. I wish we could say the same for other economic sectors, like banking and the automobile industry.
• ### And there is not a crisis in the software industry?

I've got articles upon articles of software disasters going back to the first days of computing.

Humans have been able to build space shuttles, massive jumbo jets, and huge engineering endeavours, and yet "software engineers" are still stuck in the 60s and 70s.

We have CPU integrated circuits with tens of millions of components, just as complex as (or more complex than) the software that runs on them; look at how integrated circuits have evolved and at the science of making CPUs.

Why is the software industry not able to work to the engineering standards that every other industry is required to?

ISO 9001, quality assurance programs, TQM, code reviews, unit testing, object orientation.

Why is there not a rock-solid set of basic computing function tools, tested and QA'd, that are efficient and well engineered?

A passenger jet aircraft has millions of complex components and a much more hostile environment to deal with, yet its reliability is amazing.

What happens if one crashes? It's studied, the cause is found, and whatever needs to be done so that it will not happen again is done.

An engineer who works on the design of a jumbo jet would be instantly fired if he said to his boss, "Look, there are a lot of components here; a few bugs won't hurt."

Such a lax attitude is not part of engineering; it's the mark of a backyard hacker, someone unskilled and untrained, stabbing in the dark and hoping to stumble upon something that works.

This is the state of the art of computer "science".

CPUs have gone from the 4004, little more than a basic function calculator, to multi-gigahertz 64-bit machines in 30 years.

In that same 30 years we've gone from C to shining C (maybe with a few enhancements tacked on along the way).

Until software engineering becomes just that, an engineering discipline with all that entails, the cost of not "DOING THE RIGHT THINGS RIGHT THE FIRST TIME" (a quality assurance term), your CoNC (Cost of Non-Conformance), will run to trillions of dollars.

How long will it be before government legislates to "clean up" the industry?

• ### Philosophical Differences

I think you'll find the difference between AMD and Intel to be quite startling.

AMD is of the mindset that they don't release a product until it is ready. Intel, on the other hand...

The same can be said of your beloved Microsoft... Some even say that Microsoft's lack of quality has lowered user expectations.

And then there is the whole Microsoft development tools - Click and Drool.

They're marketed to managers as being 'so easy', even non-programmers can use them! See - no training needed! Saves the company money!

• ### RE: Philosophical Differences

SpideyMike,

I think you hit the salient points when you said:

[i]The same can be said of your beloved Microsoft... Some even say that Microsoft's lack of quality has lowered user expectations.

And then there is the whole Microsoft development tools - Click and Drool.

They're marketed to managers as being 'so easy', even non-programmers can use them! See - no training needed! Saves the company money![/i]

To your first comment: why in the world do people [b]put up with[/b] the crap that M\$ foists on the world? It must be because (purchasing) decisions are [b]not[/b] made by IT people, but by clueless managers.

Second point, [u]Click and Drool[/u] - I like that suggestion. Oh, wow, [b]eye candy[/b], and the resources it uses slow your computer down.

Third point, is that just another way of saying that [b]if you have enough monkeys throwing bananas at a keyboard, they [u]might[/u] be able to write some executable code??[/b] That sounds like a variant of the [u]Geico[/u] tag line: [i]it's so easy, even a Caveman can do it![/i]

IMHO - M\$, epic FAIL

So much capital wasted feeding M\$'s cash trough that could be put to better uses.
• ### Well, it beats wasting it on

dead end things like Linux or Apple, would you not agree?

[i]It must be due to the fact that (purchasing) decisions are not made by IT people; but by clueless managers.[/i]

Nice way of selling everyone short. Guess you must be the only smart IT person; the rest are just plain stupid.

[i]Second point, Click and Drool - I like that suggestion. OH, wow, eye candy, and the resources that it uses slows your computer down.[/i]

Selling everyone else (but yourself) short again?

[i]Third point, is that just another way of saying that if you have enough monkeys throwing bananas at a keyboard, they might be able to write some executable code?? That sounds like a variant of the Geico tag line: it's so easy, even a Caveman can do it![/i]

Guess you are not a programmer, otherwise you would understand a lot more; but then, it sounds like you are just selling everyone else (but yourself) short again.

IMHO - Your Post = Epic FAIL!

So why even bother?

• ### The Mythical Man-Month by Fred Brooks

The examples you cite from other disciplines depend on drawings and other aids to ensure the developer and the customer agree on the end result before they begin. We try to do this in software as well, and over my 40 years in the business we have continued to fiddle with and shape our life-cycle process, with mixed success.

Which brings me to Brooks. As he points out, software is neither visual nor visualizable. Usually, the software engineer and the team members are not SMEs in the business process, and there is no way to be sure that what they are building is what is needed. This is the same limitation, by the way, that defies the notion of a valid software patent.

This is not to excuse the software industry. Admittedly, development carries some risk, but practitioners add to the risk by accepting in-process changes and other violations of sound engineering practices.

There are those among us who believe software engineering will never reach the maturity level of other disciplines, but we sure as heck can improve on today's record.

• ### Accountability

I used a phrase a while back to explain this in the security context of application development, but it applies to what you mention too. "Features are added by programmers with code, security is added by lawyers with EULAs."
• ### All the engineering and QA in the world

don't guarantee 100% quality. Sure, the space shuttle and jumbo jets fly fairly reliably. There was and is also a very long development cycle for each new model produced. And various parts are designed, prototyped, tested, redesigned and retested through enough iterations that the end result is something that works [i]most of the time[/i]. It may still fail at some point due to testing procedures not stressing a component enough, a machined tolerance not allowing for enough wear or expansion from heat (even though the part was built to specifications), or just plain old human error, careless or otherwise.

Software isn't any different. Any conscientious software engineer or programmer will make every attempt to assure that the end product functions to specifications, but upper management and/or marketing regularly provide roadblocks to allowing success (and guaranteeing failure).
• ### A few counter-examples

Engineering works well and more or less predictably when there is nothing new. When new things are tried, disasters do happen, and money and lives are lost. Just a few random examples:

- Icarus
- Titanic
- Collapsing bridges
- Various power plant disasters
- Challenger and Columbia
- LHC accidents
- Boeing delays of the new plastic aircraft
- F22 program (for it being cancelled)
- Various car, battery, etc recalls
- Finally, the banking system!

When you speak of the electronics industry, think of it: for many years we have used the same basic CPU architecture and even the same set of machine commands, traceable back to the 8008. Almost all other CPU architectures have all but failed.

Also, the rest of the engineering disciplines produce devices that operate within narrow limits. You can't just throw random input (voltage, etc.) at CPUs. Airplanes must fly within a narrow speed band at high altitudes or they will fall or disintegrate. Nobody allows passengers to play with different parts of the plane, customize it by uninstalling the wings, or delete the landing gear. There are also no crooks trying to siphon fuel anywhere near the planes.

Yet we have grown to expect software to work under very harsh conditions and still behave the way we expect, each of us, regardless of our different tastes.

Yet again, we expect that we should not pay for it. We pay, on average, a higher price for an airline ticket. Granted, there is the cost of production and service and fuel, but an airplane may fly for 10 years or more, yet we expect an OS to be refreshed a couple of times a year, for free. A recent OS upgrade is priced at less than 3 visits to a cinema.

There are two factors that make the software industry look like a disaster: very high expectations for new features, and the expectation that they come for free. Try to find examples of these outside the software industry and you'll find plenty.
• ### RE: Annual cost of IT failure: \$6.2 trillion

I'm not buying the \$6.2T number, since the GWP (Gross World Product) is \$55T. The only failure here is this article.

• ### What about the math?

Just because a number seems high to you does not make it wrong.

If you want to advance this conversation, then critique the math or the assumptions. Simply stating your opinion without factual basis is fine, but rather pointless.