IT Week ran a story saying that “IT managers should learn to prioritise in order to avoid failing projects.” The piece adds:
IT directors must learn to say no to unreasonable or unviable requests from business executives if they are ever to tackle IT projects’ chronically high failure rate.
Speaking at a roundtable debate after the event, Adrian Dabell, application development director at ACE Insurance, said it was frustrating to see IT criticised for project failure rates of around 50 percent when it was the unreasonable demands and specification changes imposed by the business that often caused projects to fail.
Dabell said that if IT leaders are to improve their project success rate, they must be willing to tell the business when projects are likely to fail. “You need transparency between IT and the business,” he said. “You need to say if you have too much else on or if it is just not possible.”
On its face, this is good advice. A deeper look, however, calls some of its assumptions into question. Is it really straightforward to predict in advance which projects will fail? Most responsible CIOs already do their best to weed out bad projects before they even begin.
While some ill-fated projects are certainly doomed from the start, many die from a mix of mismanagement, customer inexperience or denial, vendor greed, poor planning, and so on. It is usually this combination of factors, rather than any single cause, that sinks a project. Isolating one specific factor, to the exclusion of the rest, ignores the complex reality that typically lies behind IT project failures.