
Turn project failures into success stories

Commentary: Research shows that more than half of all large application development projects fail. But it doesn't have to be this way.
Written by Motti Tal, Contributor

Commentary: How many IT projects fail today? Too many.

Research reports often show that more than half of all large application development projects fail: they are not delivered on time or within budget, or they fail to meet established expectations. It doesn't have to be this way.

Assessing and improving quality of service is a critical phase in the application lifecycle process. In the pre-production phase of a new application or version, the goal is to understand how that application will behave in actual production, and to spot any potential flaws in programming, design, and configuration before letting it fly into real-world use.

To do this, IT managers test their applications for performance and quality of service. But today's testing tools fall short: they give companies no way to correlate load and performance test results with the tools that drill deep into each individual component of a transaction. As a result, testing and development teams are forced to guess where to focus their efforts, and they waste a lot of time looking for trouble spots in the wrong places. The end result is that too many low-quality applications are released into production.
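
To make that correlation concrete, here is a minimal Python sketch of the kind of cross-referencing these tool sets fail to do out of the box: it joins load-test results to component-level timings by a shared request ID and shows where the time actually goes in slow transactions. The record formats, component names, and threshold are all hypothetical.

    import statistics
    from collections import defaultdict

    # Hypothetical records: each load-test request carries an ID, and a
    # component-level profiler logs how long each tier spent on that request.
    load_test_results = [
        {"request_id": "r1", "total_ms": 1240},
        {"request_id": "r2", "total_ms": 310},
    ]
    component_timings = [
        {"request_id": "r1", "component": "db", "ms": 980},
        {"request_id": "r1", "component": "app", "ms": 210},
        {"request_id": "r2", "component": "db", "ms": 120},
        {"request_id": "r2", "component": "app", "ms": 150},
    ]

    SLOW_MS = 1000  # illustrative threshold for a "slow" transaction

    slow_ids = {r["request_id"] for r in load_test_results if r["total_ms"] > SLOW_MS}

    # For the slow transactions only, sum up where the time actually went.
    per_component = defaultdict(list)
    for t in component_timings:
        if t["request_id"] in slow_ids:
            per_component[t["component"]].append(t["ms"])

    for component, samples in sorted(per_component.items(),
                                     key=lambda kv: -statistics.mean(kv[1])):
        print(f"{component}: mean {statistics.mean(samples):.0f} ms "
              f"across {len(samples)} slow transactions")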

Yes, their tools are letting them down. But not all failed initiatives can be blamed on bad tools. Companies can do more, but often don't, to ensure their projects make the leap from failure to success.

The first essential step is realistic and effective planning. As simple as it sounds, new applications and versions often fail because testing occurs too late in the process, too close to the actual delivery date. So plan and allocate time for these kinds of tests, to make sure you have as much time as possible to verify that the application is not only functional but meets or exceeds established service level requirements. The earlier you can identify scalability issues, the better for everyone. Recently, we encountered a situation in which a company had left itself one day, a single day, to conduct its performance testing. The last thing you can do in a situation like that is ask for more time; there is no more time. That company is now rolling out its application blind and hoping for the best.

Second, be sure to make realistic assumptions about your environment. One of the worst mistakes an organization can make is to expend the effort to conduct performance and quality-of-service testing, but fail to model scenarios that closely resemble the real-world conditions the application will face.

The caveat is that most companies really don't know what their real application usage patterns look like, even for applications already running in their environment. Still, you need to model your real conditions as closely as you can. If you can get production data on usage patterns, feed it into your testing. And if you're developing a new version of an existing application, where you already have users exhibiting usage profiles, be sure to get those profiles and use them in your test planning.
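
As an illustration of feeding production usage patterns into testing, this minimal Python sketch derives a transaction mix from a hypothetical production log and drives a load generator with the same proportions, rather than an arbitrary uniform mix. The transaction names and log format are assumptions made for the example.

    import random
    from collections import Counter

    # Hypothetical production access log, reduced to one transaction name per entry.
    production_log = [
        "login", "search", "search", "checkout", "search",
        "login", "search", "view_item", "search", "view_item",
    ]

    # Derive the observed transaction mix from production...
    mix = Counter(production_log)
    transactions = list(mix)
    weights = [mix[t] for t in transactions]

    # ...and have the load generator pick transactions in those same
    # proportions instead of hitting everything uniformly.
    def next_transaction() -> str:
        return random.choices(transactions, weights=weights, k=1)[0]

    simulated = Counter(next_transaction() for _ in range(10_000))
    print(simulated.most_common())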

IT-business policy disconnection
Another reason many IT initiatives fall short is that organizations fail to connect IT with business policy tightly enough, and early enough, in the process. How the application will perform its most important business transactions must be clearly understood early on, so that when testing begins, the most business-critical aspects of the application are the first focus. That monitoring and testing must then continue through the production rollout and into post-production, to make sure the most critical service levels are not only measured but continuously met by reliable mechanisms that assure acceptable performance.
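
One simple way to make that IT-business connection operational is to encode the agreed service levels as data and check every test run against them, business-critical transactions first. The sketch below does this in Python; the transaction names, SLA figures, and measured latencies are all hypothetical.

    # Hypothetical SLA table agreed with the business side, ordered so the
    # most business-critical transactions are checked first.
    sla_ms = {
        "checkout": 800,      # business-critical: revenue depends on it
        "search": 1200,
        "view_item": 1500,
    }

    # Measured 95th-percentile latencies from a test run (illustrative numbers).
    measured_p95_ms = {"checkout": 950, "search": 700, "view_item": 1400}

    for transaction, limit in sla_ms.items():
        observed = measured_p95_ms.get(transaction)
        if observed is None:
            print(f"{transaction}: NOT COVERED by the test run")
        elif observed > limit:
            print(f"{transaction}: FAIL ({observed} ms > {limit} ms SLA)")
        else:
            print(f"{transaction}: OK ({observed} ms <= {limit} ms SLA)")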

No matter how well organizations plan, model, and test, there will always be unforeseen circumstances during new initiatives, and unforeseen consequences once the application gets into the real world. Take service-oriented architectures, for example: developers create a service and diligently test its usage patterns, yet it doesn't take long after rollout before end users put that service to work in unanticipated ways. That is the very essence of componentized architectures. That's why it's more crucial than ever to put in place a continuous testing and monitoring infrastructure, so you can respond rapidly to changes in your environment. This is a key to your ongoing success.
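
What might such monitoring look for? One hedged sketch, again in Python with hypothetical transaction names and an illustrative drift threshold: compare the usage mix a service was tested against with the mix production monitoring actually observes, and flag callers the tests never modeled.

    from collections import Counter

    # Usage mix the service was tested against (hypothetical counts)...
    tested_mix = Counter(search=70, checkout=20, view_item=10)
    # ...versus what production monitoring actually observes after rollout.
    observed_mix = Counter(search=40, checkout=15, view_item=5, bulk_export=40)

    def proportions(c: Counter) -> dict:
        total = sum(c.values())
        return {name: count / total for name, count in c.items()}

    tested, observed = proportions(tested_mix), proportions(observed_mix)

    # Flag usage the tests never modeled, or shares that have shifted far
    # from the tested assumption.
    for name in set(tested) | set(observed):
        t, o = tested.get(name, 0.0), observed.get(name, 0.0)
        if t == 0.0 and o > 0.0:
            print(f"{name}: untested usage, now {o:.0%} of traffic")
        elif abs(o - t) > 0.15:  # illustrative drift threshold
            print(f"{name}: drifted from {t:.0%} tested to {o:.0%} observed")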

Outsourced projects face their own unique set of challenges, which can often lead to failure. During the integration and delivery of outsourced application projects, it's usually the in-house IT team that conducts the integration and acceptance testing. The problem is that they're testing an application developed by the outsourcer. The internal teams don't understand its nuances; they don't know the code, and they have no real visibility into how the application's transactions behave. That makes it frustrating and extremely challenging for them to fix underlying issues during the testing and evaluation phase.

Identifying problems
Because so few systems stand alone today, one key to optimizing an outsourced project is to quickly determine whether a performance problem lies in the outsourced part of the application, or in an aspect that is under your own organization's control to fix, such as the way one of your existing systems interacts with the outsourced application. You can't afford the time and expense of having the outsourcer chase problems that aren't in its part of the transaction path.
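
A minimal sketch of that triage, assuming you can capture per-transaction timings at the integration boundary (the trace format and numbers below are invented for illustration): split each transaction's time between the outsourced application and your in-house systems, and see which side dominates.

    # Hypothetical per-transaction timing breakdown captured at the
    # integration boundary between the outsourced application and the
    # in-house systems it calls.
    traces = [
        {"txn": "t1", "outsourced_ms": 120, "in_house_ms": 940},
        {"txn": "t2", "outsourced_ms": 110, "in_house_ms": 880},
        {"txn": "t3", "outsourced_ms": 900, "in_house_ms": 130},
    ]

    for trace in traces:
        total = trace["outsourced_ms"] + trace["in_house_ms"]
        side = "outsourced" if trace["outsourced_ms"] > trace["in_house_ms"] else "in-house"
        print(f"{trace['txn']}: {total} ms total, bulk of it {side} "
              f"({trace['outsourced_ms']} ms vs {trace['in_house_ms']} ms)")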

As the outsourced application is being developed, be sure to conduct tests often, and to obtain resource consumption metrics. With these metrics in hand, you can conduct tests that, unfamiliar as you are with the application's internals, you couldn't conduct before.
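
As a sketch of gathering such metrics, the following Python snippet samples host CPU and memory while a test run is in progress, using the third-party psutil library; the sampling duration and interval are arbitrary choices made for the example.

    import time
    import psutil  # third-party; pip install psutil

    def sample_resources(duration_s: int = 10, interval_s: float = 1.0):
        """Sample whole-host CPU and memory while a test run is in progress."""
        samples = []
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            samples.append({
                "cpu_pct": psutil.cpu_percent(interval=interval_s),
                "mem_pct": psutil.virtual_memory().percent,
            })
        return samples

    if __name__ == "__main__":
        for s in sample_resources(duration_s=5):
            print(f"cpu {s['cpu_pct']:5.1f}%  mem {s['mem_pct']:5.1f}%")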

Throughout the project, be sure to establish clear performance baselines and effective communication channels to the development team. The gap between application developers and testers within an organization is already wide; with an outsourcer it is more pronounced still.
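
A baseline is only useful if something checks against it. Here is a minimal Python sketch, with invented numbers and a 10 percent tolerance chosen purely for illustration, that flags transactions whose latency has regressed past the agreed baseline.

    # Hypothetical baseline recorded after the last accepted test run:
    # 95th-percentile latency in milliseconds per transaction.
    baseline_p95_ms = {"checkout": 780, "search": 650}
    current_p95_ms = {"checkout": 905, "search": 640}

    TOLERANCE = 0.10  # flag anything more than 10% slower than its baseline

    for name, now_ms in current_p95_ms.items():
        base_ms = baseline_p95_ms.get(name)
        if base_ms is None:
            print(f"{name}: no baseline on record")
        elif now_ms > base_ms * (1 + TOLERANCE):
            print(f"REGRESSION {name}: {base_ms} ms baseline -> {now_ms} ms now")
        else:
            print(f"{name}: within tolerance ({now_ms} ms vs {base_ms} ms baseline)")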

It all sounds so simple. But it’s not. If it were simple, more than 50 percent of application development projects wouldn’t fail. And the truth is: they don’t have to.

Biography
Motti Tal is executive vice president for Marketing and Business Development at OpTier.
