The endless catalog of IT failure rests on a foundation of poor judgment, inadequate communication across business groups and information silos, and conflicting agendas. Most of my blogging discusses what happens when these human failings intersect IT projects.
Although human issues are critical, downtime and other problems also arise from highly technical causes of failure. Interestingly, there is often a human dimension even when problems are rooted in technology.
To explore this topic, I asked Dr. Bill Curtis, SVP & Chief Scientist of CAST Software, to write a guest post linking common causes of failure to business outcomes.
Bill is one of the world's foremost authorities on software development process and improvement. He is best known for leading development of the Capability Maturity Model (CMM), a global standard for evaluating the capability of software development organizations. You may not recognize his name, but almost certainly, you've interacted with applications developed using processes he pioneered.
Most organizations wait until an embarrassing disaster strikes before even considering the link between application quality and business benefit. Although these same companies can quantify the costs of failure, they still struggle to build a business case justifying proactive investment in application quality, which can prevent these embarrassments.
To help stimulate a discussion about risk and reward in preventing failure, this post presents the top six ways that poor application quality affects business.
1. Poor business "fit" (mis-functional applications)
The biggest complaints about operational business applications are that they just don't do what business users wanted. Consequently, employees implement endless workarounds, managers use hidden spreadsheets, and the business fails to benefit from its application investment.
The biggest cause of mis-functional applications is missed or inaccurate user requirements. It is easy to blame IT for doing a bad job of requirements analysis, but the root cause often lies in immature business processes that vary widely across the business.
In many organizations, these processes are often so poorly defined that requirements analysis resembles an archaeological dig.
2. Application outages

The most damaging outages usually strike customer-facing systems such as airline reservations, customer service, or online shopping. Downtime costs, which frequently reach six figures per hour, include lost revenue as well as the expense of reactivating the system, recovering transaction fragments, spikes in help desk utilization, and even liquidated damages.
The root causes of outages are usually non-functional application problems that are generally invisible to end users until they cause a problem. Typically, developers did not engineer the application defensively to handle the myriad operational challenges that can beset a system, such as excessive customer load or glitches in other applications with which it interacts.
3. Security breaches
The cost of security breaches can be staggering, especially considering expenses associated with closing the vulnerability, repairing any malicious damage, alerting customers whose records may have been penetrated, and then rectifying any damage caused to them. For instance, I was recently among tens of thousands who received replacement credit cards because hackers penetrated a vendor's transaction records.
Security breaches most often result when developers inadvertently allow pathways into the application that skirt authentication procedures or expose the internal structure of the application through user messages. Attackers can use vulnerabilities such as these to inject malicious functions into the application during user interactions.
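The classic example of such a pathway is SQL injection. The sketch below, using an in-memory SQLite table with made-up data, contrasts a query built by string concatenation, where attacker input becomes part of the SQL itself, with a parameterized query that treats the same input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "alice' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so the injected
# OR clause matches every row and skirts the intended name check.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safer: a parameterized query keeps the input out of the SQL grammar.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # [('alice',)] -- the injection matched rows it should not
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```

Static analysis tools flag exactly this pattern: tainted input reaching a query, command, or output without passing through a sanitizing boundary.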
4. Business dis-agility
As organizations automate more processes, business agility is directly affected by the speed with which applications can be modified or enhanced to meet rapidly changing requirements. The longer it takes to modify or enhance an application, the less agile and competitive the business.
When an application becomes needlessly complex and its architecture decays through poorly engineered modifications, the time to release new functions and the number of new defects injected into the application grow proportionately.
With each decline in application quality, the business must wait longer to implement adjustments that enhance the company's competitive market position.
5. Poor performance
Although we rarely calculate the cost of lost productivity caused by degraded application performance, the cost to the business is alarming once you consider the impact across a large department such as sales, claims processing, or customer service. Even a five percent reduction in application performance can result in hundreds of thousands of dollars in lost productivity each quarter.
The root cause usually involves programs that may be functionally correct, but written with poor coding practices that cause excessive processing as usage or data volume increases. Performance problems are difficult to detect during development unless testers have the resources needed to simulate high loads the application may experience after deployment.
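A small illustration of this class of defect: both versions below produce identical results, but one does a linear scan per lookup, so its cost grows with data volume, while the other uses a hashed set. This is an illustrative Python sketch, not a specific case from the post.

```python
import time

customer_ids = list(range(50_000))
lookups = list(range(49_000, 50_000))

# Functionally correct but slow: list membership is a linear scan,
# so per-lookup cost grows as the data volume grows.
start = time.perf_counter()
slow_hits = sum(1 for cid in lookups if cid in customer_ids)
slow = time.perf_counter() - start

# Same result with a set: hashing makes each lookup roughly constant-time.
id_set = set(customer_ids)
start = time.perf_counter()
fast_hits = sum(1 for cid in lookups if cid in id_set)
fast = time.perf_counter() - start

print(slow_hits == fast_hits)  # True -- identical functional results
print(slow > fast)             # typically True; the gap widens with volume
```

At test-time data sizes both versions feel instant, which is why such problems surface only under production volumes unless load is simulated during testing.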
6. Data corruption
Data corruption is often not detected until users see a bill or report containing wildly inaccurate information. The cost of reconstructing the database and re-releasing invoices, documents, or other corrected materials can be extensive.
Frequently, data corruption results when developers bypass the approved methods for accessing the database, leading to data changes executed in an uncontrolled or poorly coordinated manner.
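One way an approved access method prevents this, sketched below with SQLite and a hypothetical `transfer` routine: related updates are wrapped in a single transaction, so a failure partway through rolls everything back instead of leaving half-updated records.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("payer", 100), ("payee", 0)])
conn.commit()

def transfer(conn, source, target, amount):
    """Approved access method (hypothetical): both updates commit
    together or not at all, so a mid-transfer failure cannot leave
    the balances inconsistent."""
    try:
        with conn:  # opens a transaction; rolls back on any exception
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, source))
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, target))
    except sqlite3.Error:
        pass  # rollback already happened; report the failure upstream

transfer(conn, "payer", "payee", 40)
print(conn.execute(
    "SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('payee', 40), ('payer', 60)] -- the total still sums to 100
```

A developer who writes directly to the table with an ad hoc, single-statement update skips exactly this coordination, which is how partially applied changes creep into the data.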
Fixing the problems
Although many of these problems can slip through testing undetected, there are application quality practices that can detect them.
IT executives should match their investment in quality practices such as peer reviews, testing, and static code analysis to the magnitude of the business risks these investments are expected to mitigate.
[Thanks to Bill Curtis for writing this guest post. Photo from CAST Software.]