
New IT project failure metrics: is Standish wrong?

The Standish Group's Chaos Report describes two-thirds of IT projects as being "challenged." Now, three academics have published a report suggesting these numbers are flawed, and that only one-third of IT projects actually fail.
Written by Michael Krigsman, Contributor

If this new report is accurate, it represents a significant departure from common perceptions of IT project risk and failure. The research was conducted by Chris Sauer, Andrew Gemino, and Blaize Horner Reich, who describe their results in an article titled "The impact of size and volatility on IT project performance."

EXECUTIVE CONCLUSIONS

The following table outlines management recommendations made by the authors on the basis of their research:

DIFFERENCES RELATIVE TO PREVIOUS RESEARCH

To understand why the results differ from Standish, I asked Blaize Horner Reich, one of the authors, to explain. Blaize sent me unpublished material describing differences between her group's methodology and the one used to create the Chaos Report, which remains the most widely quoted measure of software development failures.

In contrast to Standish, Blaize and her colleagues used project managers, rather than executives, as respondents. Since project managers typically have more detailed and specific project knowledge than executives do, this should yield more accurate and detailed results. In addition, the group surveyed only respondents' most recent projects, presumably reducing the effect of faded memory on the findings.

A second important difference between the new research and Standish derives from assumptions the authors made about the best way to classify IT projects. Instead of following the Standish model, which characterizes projects as failed/challenged/successful, the authors used a finer-grained classification system, which emerged as they analyzed the data.

Here is a list of the failure/success categories that arose out of the research:

MEASUREMENT VARIABLES AND DATA

The authors examined two primary measures as determinants of project risk: size and volatility.

Project Size

The authors measured size along four dimensions:

  • Effort (measured in person-months)
  • Duration (measured in elapsed time)
  • Team size
  • Budget

In summarizing the impact of size-related factors on project performance, the authors write:

Overall, increases in the size of a project mean increased risk, even for experienced project managers. However, conventional wisdom that restricts project size using budget or duration is somewhat misguided. A focus first on effort, then on team size and duration will limit risk of underperformance.

Surprisingly, we found that one-quarter of projects underperform however small their size. Even projects with budget less than £50,000, effort less than 24 person-months, duration shorter than six months, or team size of less than five experienced 25% risk. There is a significant level of risk regardless of size.

This size data is reported in the tables below:

Volatility

Project volatility was measured along two dimensions:

  • Governance volatility (measured by changes in project manager or executive sponsor)
  • Target volatility (measured by changes in schedule, budget, and scope)

The authors summarize volatility as a determinant of failure:

Projects with no change in key personnel faced a 22% risk of underperforming, whereas projects with two or more changes faced a risk of more than 50%. Projects with nine or fewer target changes faced no more than a 33% risk of underperforming whereas projects with more than nine changes faced a risk over 50%. These results suggest volatility is strongly related to performance, and indicate the importance of project governance.
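As a rough illustration of how these reported thresholds combine, the quoted figures can be encoded as a simple lookup. This is a hypothetical sketch of my own, not the authors' statistical model: the function name and the take-the-maximum logic are assumptions, and the percentages are treated as upper-bound risk estimates drawn from the passage above.

```python
# Hypothetical sketch encoding the volatility thresholds quoted above.
# Not the authors' model; the function and its max-of-thresholds logic
# are illustrative assumptions only.

def volatility_risk(personnel_changes: int, target_changes: int) -> float:
    """Rough upper-bound risk of underperformance, per the reported figures."""
    risk = 0.22  # baseline: no change in key personnel -> 22% risk
    if personnel_changes >= 2:
        risk = max(risk, 0.50)  # two or more personnel changes -> >50% risk
    if target_changes > 9:
        risk = max(risk, 0.50)  # more than nine target changes -> >50% risk
    elif target_changes > 0:
        risk = max(risk, 0.33)  # nine or fewer changes -> up to 33% risk
    return risk

print(volatility_risk(0, 0))   # stable project: baseline 0.22
print(volatility_risk(2, 12))  # high churn: 0.5
```

The point of the sketch is simply that the two volatility dimensions act as independent risk floors: either kind of churn alone is enough to push a project into the higher-risk band.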

The data regarding volatility is shown below:

AUTHORS' SUMMARY CONCLUSIONS

The authors offer these managerial insights:

Our results indicate that while approximately 9% of IT projects are abandoned, another 7% consistently overdeliver on original project targets.

What our results suggest is that IT project managers cannot accept all of the responsibility of delivering projects successfully. Top management and steering committees have a significant role to play in managing project risk. Ambitious-sized projects, moving targets, and managerial turnover present challenges for IT projects that stretch even experienced project managers and result in greater variances. Effective oversight of projects can help project managers respond to these challenges.

MY CONCLUSIONS

This research is serious, credible, and cannot be ignored. In addition, it's consistent with other recent studies of IT failure. I recommend following the authors' work to see how it develops over time.

As the authors correctly assert, IT failure rates remain high, and responsibility for success clearly lies with both executive management and the project team. The risk reduction strategies outlined in the executive conclusions table (at the top of this post) are well-considered and should be followed by organizations implementing IT projects.

Although the research methodology and data may differ from previous studies, the management conclusions and action items are fundamentally in accord with best practices for avoiding failed IT projects.
