New IT project failure metrics: is Standish wrong?

Summary: The Standish Group's Chaos Report describes two-thirds of IT projects as being "challenged." Now, three academics have published a report suggesting these numbers are flawed, and that only one-third of IT projects actually fail.

TOPICS: Banking, CXO

The Standish Group's Chaos Report describes two-thirds of IT projects as being "challenged." Now, three academics have published a report suggesting these numbers are flawed, and that only one-third of IT projects actually fail. If this new report is accurate, it represents a significant departure from common perceptions of IT project risk and failure. The research was conducted by Chris Sauer, Andrew Gemino, and Blaize Horner Reich, who described their results in an article titled "The impact of size and volatility on IT project performance."


The following table outlines management recommendations made by the authors on the basis of their research:

[Table: New research into IT failure rates]


To understand why the results differ from Standish, I asked Blaize Horner Reich, one of the authors, to explain. Blaize sent me unpublished material describing differences between her group's methodology and that used to create the Chaos Report, which remains the most widely-quoted measure of software development failures.

In contrast to Standish, Blaize and her colleagues used project managers, rather than executives, as respondents. Since project managers typically have more detailed and specific project knowledge than executives do, this approach should yield more accurate and detailed results. In addition, the group surveyed only respondents' most recent projects, presumably reducing the effect of poor memory on the findings.

[Table: New research into IT failure rates]

A second important difference between the new research and Standish derives from assumptions made by the authors regarding the best way to classify IT projects. Instead of following the Standish model, which characterizes projects as failed/challenged/successful, the authors used a more finely-grained classification system, which emerged as they analyzed the data.


Here is a list of the failure/success categories that arose out of the research:

[Table: New research into IT failure rates]


The authors examined two primary measures as determinants of project risk: size and volatility.

Project Size

The authors measured size along four dimensions:

  • Effort (measured in person-months)
  • Duration (measured in elapsed time)
  • Team size
  • Budget

In summarizing the impact of size-related factors on project performance, the authors write:

Overall, increases in the size of a project mean increased risk, even for experienced project managers. However, conventional wisdom that restricts project size using budget or duration is somewhat misguided. A focus first on effort, then on team size and duration will limit risk of underperformance.

Surprisingly, we found that one-quarter of projects underperform however small their size. Even projects with budget less than £50,000, effort less than 24 person-months, duration shorter than six months, or team size of less than five experienced 25% risk. There is a significant level of risk regardless of size.
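The size thresholds quoted above can be expressed as a simple check. The following is a minimal illustrative sketch, not the authors' model: the field names and the flat 25% baseline figure are assumptions made here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Project:
    budget_gbp: float            # total budget in pounds sterling
    effort_person_months: float  # total effort
    duration_months: float       # elapsed time
    team_size: int               # number of team members

def is_small(p: Project) -> bool:
    """True only if the project falls under every size threshold quoted above."""
    return (p.budget_gbp < 50_000
            and p.effort_person_months < 24
            and p.duration_months < 6
            and p.team_size < 5)

# Per the authors, even projects this small carried roughly a 25% risk
# of underperforming -- size reduction alone does not eliminate risk.
BASELINE_UNDERPERFORMANCE_RISK = 0.25

small_project = Project(budget_gbp=40_000, effort_person_months=12,
                        duration_months=4, team_size=3)
print(is_small(small_project), BASELINE_UNDERPERFORMANCE_RISK)
```

Note that the quoted passage reports the 25% figure even when only one threshold is met; the conjunctive check here illustrates the strictest reading.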

This size data is reported in the tables below:

[Tables: New research into IT failure rates]


Project volatility was measured along two dimensions:

  • Governance volatility (measured by changes in project manager or executive sponsor)
  • Target volatility (measured by changes in schedule, budget, and scope)

The authors summarize volatility as a determinant of failure:

Projects with no change in key personnel faced a 22% risk of underperforming, whereas projects with two or more changes faced a risk of more than 50%. Projects with nine or fewer target changes faced no more than a 33% risk of underperforming whereas projects with more than nine changes faced a risk over 50%. These results suggest volatility is strongly related to performance, and indicate the importance of project governance.
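The volatility figures quoted above can be summarized as rough risk bands. This is a hedged sketch for illustration, not the authors' statistical model; the function names are assumptions, and the middle governance band is interpolated from the two quoted endpoints.

```python
def governance_risk(personnel_changes: int) -> str:
    """Approximate underperformance risk by number of changes in
    project manager or executive sponsor, per the figures quoted above."""
    if personnel_changes == 0:
        return "~22%"
    if personnel_changes >= 2:
        return ">50%"
    # One change falls between the two quoted figures.
    return "between 22% and 50% (not separately quoted)"

def target_risk(target_changes: int) -> str:
    """Approximate underperformance risk by number of changes to
    schedule, budget, or scope."""
    if target_changes <= 9:
        return "<=33%"
    return ">50%"

print(governance_risk(0), governance_risk(3), target_risk(5), target_risk(12))
```

The steep jump in both bands is what leads the authors to single out governance stability as a key lever for managing project risk.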

The data regarding volatility is shown below:

[Table: New research into IT failure rates]


The authors offer these managerial insights:

Our results indicate that while approximately 9% of IT projects are abandoned, another 7% consistently overdeliver on original project targets.

What our results suggest is that IT project managers cannot accept all of the responsibility of delivering projects successfully. Top management and steering committees have a significant role to play in managing project risk. Ambitious-sized projects, moving targets, and managerial turnover present challenges for IT projects that stretch even experienced project managers and result in greater variances. Effective oversight of projects can help project managers respond to these challenges.


This research is serious, credible, and cannot be ignored. In addition, it's consistent with other recent studies of IT failure. I recommend following the authors' work to see how it develops over time.

As the authors correctly assert, IT failure rates remain high, and responsibility for success clearly lies with both executive management and the project team. The risk reduction strategies outlined in the executive conclusions table (at the top of this post) are well-considered and should be followed by organizations implementing IT projects.

Although the research methodology and data may differ from previous studies, the management conclusions and action items are fundamentally in accord with best practices for avoiding failed IT projects.




Discussion
  • Failure rates

    This is great stuff, but there's a huge issue of methodology that the researchers apparently overlooked: By interviewing project managers, they automatically excluded all the projects that don't *have* PMs (and that's a lot of them). The projects that are run by amateur, part-time managers or committees are generally the ones that crash and burn. If you factor these fiascos into the equation, I suspect the overall failure rates will look a lot more like the numbers that Standish finds.
    tinfoil hat
    • Excellent point

      Very hard to argue this, and I hope the researchers take note.
    • Another good observation

      For those who could benefit from it, this logic suggests that appointing a "qualified" project manager will double your odds of success.

      My question would be around respondent selection based on "convenience" for both sets of interviews. If you are interviewing executives, the ones most likely to respond are those who believe that things can be improved. For project managers, the most likely to respond are those who are proud of their work.
  • RE: New IT project failure metrics: is Standish wrong?

    Projects are large and complex, and the rate of change is accelerating, demanding larger and more complex projects. It seems to me the call is for more, and more accurate, testing processes to ensure that each phase and component of the project meets quality and performance standards.
  • Obvious Omission in Discussion

    of these research studies is the potential effect of the difference in perspective of the participating subjects. Executives may tend to be more critical of project performance regardless of external factors, while project managers might not be objective concerning the results of their work. Just as authors might have a different opinion of their writing than editors. This could explain the one third/two thirds flip-flop in the findings. One recommendation should be the replication of the different research methods with the alternative sets of subjects before considering any finding as conclusive.
  • What's the target

    There's another potential explanation for the difference. PMs may well measure themselves against their original plan - before the budget was cut, the time shortened, the requirements changed, and changed again. Their original plan had plenty of contingency, resources, etc., some or all of which gets cut at project initiation. The execs focus on their expectations - full requirements at post-budget-cut costs. The PMs see things differently - otherwise they'd all give up....
    • both are correct

      Both Standish and Blaize do excellent work.
      As stated in other posts, Standish and Sauer et al. are researching different samples and units of measure. Having reviewed a number of projects at both exec and PM levels, I have found that execs "tend" to compare the project outcome to the original project concept charter or business case, while project managers focus on the final project deliverable.
      As ZDNet@ comments, a lot can happen between day "zero" and project "end".
      The project manager can claim success if the methodology is strong, if all change requests have been reviewed and approved/rejected by the project sponsor AND the steering committee, and when the client signs off on the final deliverable.
      Also, in portfolio/program management, inter-project dependencies and priorities can cause Standish's "challenges" while individual projects can be deemed successful, even if some are deferred or even terminated early.
  • The definition of IT implementation failure

    Unfortunately, we really only hear about big projects that have failed. And the definition of a "failed project" is not universal. Is it a time or budget blowout, under-delivery of agreed-upon benefits, or an underperforming system (i.e., slow response times or lack of user adoption)? So really, in order to compare over-delivered with under-delivered IT implementations, you have to compare apples to apples.
    Corporate profiling