
Why Project Reviews Usually Suck

Written by Michael Krigsman, Contributor

Vernon Riley asks, “Do You Want to Discover the Truth About Your Projects?,” as the lead-in to a discussion of project reviews. Vernon offers examples of how biased analysis prevents accurately understanding a project’s risks and potential failure points. Of course, lack of critical oversight ultimately facilitates denial and information-hiding, which is integral to many failure situations.

Unfortunately, it’s difficult to get beneath the surface of the dynamics that drive the success and failure of any complex project. Many project reviews consist of simple checklists and progress metrics, which is fine as far as it goes. However, these measuring systems often ignore subtler warning signals, which is one reason so many failures remain undetected until large sums of money and time have been wasted.

Fortunately, there is a better way to analyze projects. Let’s look at this alternative, using executive sponsorship as an example.

Every experienced project manager knows that gaining executive sponsorship is critical to the success of IT projects. However, measuring whether the level of executive sponsorship is sufficient to support project success is not at all straightforward. In contrast, it’s relatively easy to measure more quantitative issues, such as tracking whether the project is correctly following a prescribed implementation methodology. The difficulty in measuring qualitative, organizational, and political risk factors is one reason that so many project reviews produce virtually useless results.

When measuring an ephemeral concept such as executive sponsorship, it’s tempting to ask project participants directly whether their management provides sufficient support. However, that’s a loaded question and not likely to return an accurate answer. In fact, analysis conducted by Asuret suggests that such qualitative dimensions can best be measured when they are converted to observations about easily described circumstances.

For example, one can measure executive sponsorship by asking project participants about the following:

  • Management commitment
  • Management role
  • Project champion
  • Management stability

Drill down into each of these issues and you uncover a piece of the puzzle. Aggregated, these dimensions create a broad picture of an organization’s capability to support the degree of executive sponsorship required to drive project success. In other words, executive sponsorship can be measured on a relative basis by drawing inferences from neutral, objective, and describable circumstances inside an organization.

This technique can be extrapolated across all key dimensions that impact project success or failure. The result is a relative, yet quantitative, ranking of potential points of risk. And that, my friend, is the right way to conduct a project review.
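To make the scoring idea concrete, here is a minimal sketch of how observations on each dimension might be aggregated into a relative risk ranking. The dimension names come from the list above; the response data, 1-to-5 scale, and scoring logic are illustrative assumptions of mine, not Asuret’s actual method.

```python
from statistics import mean

# Participants rate observable, describable circumstances for each
# dimension on a 1-5 scale (1 = circumstance absent, 5 = clearly present).
# These responses are invented for illustration.
responses = {
    "management commitment": [4, 3, 5, 4],
    "management role":       [2, 3, 2, 3],
    "project champion":      [5, 4, 4, 5],
    "management stability":  [3, 2, 3, 2],
}

def dimension_scores(responses):
    """Average each dimension's observations and normalize to a 0-1 score."""
    return {dim: (mean(vals) - 1) / 4 for dim, vals in responses.items()}

def risk_ranking(scores):
    """Rank dimensions from weakest (highest risk) to strongest."""
    return sorted(scores.items(), key=lambda kv: kv[1])

scores = dimension_scores(responses)
for dim, score in risk_ranking(scores):
    print(f"{dim}: {score:.2f}")
```

The point of the sketch is the shape of the analysis, not the numbers: each qualitative dimension is reduced to scored observations, and the aggregate ranking shows where sponsorship risk concentrates relative to the other dimensions.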

If you’ve seen a more effective method for understanding the dynamics around non-technical complexity on a project (IT or otherwise), let me know.
