Early warnings: The IT project failure dilemma (part 1)

High IT failure statistics highlight the lack of an effective industry-wide early warning system to prevent these problems.
Written by Michael Krigsman, Contributor

NOTE: Also read part 2 of this two-part series.

Research tells us that between 30 and 70 percent of IT projects fail in some important way, leading to global economic waste totaling perhaps trillions of dollars. These statistics highlight the lack of an effective industry-wide early warning system to prevent failure.

CIO hired gun and project turnaround specialist, Peter Kretzman, tackled the problem of failure prediction in an important article that is republished here as a guest post. CIO Magazine calls Peter "a 25-year IT and online veteran, [who] shares thoughts on focusing product and application development as well as enhancing and maintaining world-class operations. He also points out that many departments survive by hiding inefficiencies, oversights and missed opportunities."


In part 1, reprinted below, Peter laments the difficulty associated with predicting which projects will succeed or become challenged.

Part 2 describes solutions to the challenge of detecting early warning signs of trouble on IT projects.

Significantly, Peter correctly focuses on the people issues associated with technology-related breakdowns. Please read his comments on that topic carefully.


Thinking about how to prevent big system project failure has somehow always reminded me of the Will Rogers quote: “Don’t gamble; take all your savings and buy some good stock and hold it till it goes up, then sell it. If it don’t go up, don’t buy it.”

In other words, with big projects, by the time you realize it’s failed, it’s pretty much too late.  Let’s think a bit about the reasons why, and what we can do to change that.

First off, I’ve never seen a big project fail specifically because of technology. Ever. And few IT veterans will disagree with me. Instead, failures nearly always go back to poor communication, murky goals, inadequate management, or mismatched expectations.  People issues, in other words.

So much for that admittedly standard observation. But as the old saying goes, “everyone complains about the weather, but no one does anything about it.” What, then, can we actually do to mitigate project failure that occurs because of these commonplace gaps?

Of course, that’s actually a long-running theme of this blog and several other key blogs that cover similar topics. Various “hot stove lessons” have taught most of us the value (indeed, necessity) of fundamental approaches and tools such as basic project management, stakeholder involvement and communication, executive sponsorship, and the like.  Those approaches provide some degree of early warning and an opportunity to regroup; they often prevent relatively minor glitches from escalating into real problems.

But it’s obvious that projects still can fail, even when they use those techniques.

People, after all, are fallible, and simply embracing an approach or methodology doesn’t mean that all the right day-to-day decisions are guaranteed or that every problem is anticipated.  Once again, there are no silver bullets.

One of the problems, as I've pointed out before, is that it can actually be surprisingly difficult to tell, even from the inside, how well a project is going. Project management documents may be appearing reliably, milestones may be getting met, and so on. Everything looks smooth. Yet the project may be at increasing risk of failure, because you can't address problems you haven't identified.

This is particularly so because the umbrella concept of “failure” includes those situations where the system simply won’t be adopted and used by the target group, due to various cultural or communication factors that have little or nothing to do with technology or with those interim project milestones.

Moreover, every project has dark moments, times when things aren’t going well. People get good at shrugging those off, sometimes too good.  Since people involved in a project generally want to succeed, they unintentionally start ignoring warning signs, writing those signs off as normal, insignificant, or misleading.

I’ve been involved in any number of huge systems projects, sometimes even “death march” in nature.  In many of them, I’ve seen the following kinds of dangerous “big project psychologies” and behaviors set in:

  • Wishful thinking: we’ll be able to launch on time, because we really want to
  • Self-congratulation: we’ve been working awfully hard, so we must be making good progress
  • Testosterone: nobody’s going to see us fail. We ROCK.
  • Doom-and-gloom fatalism: we’ll just keep coming in every day and do our jobs, and what happens, happens.  (See Dilbert, virtually any strip).
  • Denial: the project just seems to be going badly right now; things are really OK.
  • Gridlock: the project is stuck in a kind of limbo where no one wants to make certain key decisions, perhaps because then they’ll be blamed for the failure
  • Moving the goal posts: for example, we never really intended to include reports in the system. And one week of testing will be fine; we don’t need those two weeks we planned on.

An adroit CIO, not to mention any good project leader, will of course be aware of all of these syndromes, and know when to probe, when to regroup, when to shuffle the deck.  But sometimes it’s the leaders themselves who succumb to those behaviors.  And for people on the project periphery, such as other C-level executives? It’s hard to know whom to listen to on the team, and it’s definitely dangerous to depend on overheard hallway conversations: Mary in the PMO may be a perennial optimist, Joe over in the network group a chronic Eeyore who thinks nothing will ever work, and so on.  There are few, if any, reliable harbingers of looming disaster.


Thank you to Peter Kretzman for permission to reprint this blog post.
