
Escaping decades of hidden app development inefficiency and expense

If you're writing code, pay closer attention to the major time-sink of application problem resolution and how it affects application release schedules, quality, functionality, and ultimately the company's bottom line, says BMC Software's Doug Laney.
Written by Doug Laney, Contributor
Commentary--Have you ever pondered the time and energy your development team spends on documenting, recreating, and attempting to resolve software bugs and reported issues (i.e., “application problems”)? If so, you’re quite unique. Those who focus squarely on the effort of writing code should consider paying closer attention to the major time-sink of application problem resolution and how it affects their application release schedules, quality, functionality, and ultimately their company’s bottom line.

The decades-old ingrained process for manually and iteratively resolving application problems dramatically stymies the productivity of development organizations, yet most executives have little to no understanding of the extent to which this commonly accepted process is affecting their IT teams and their businesses. This was revealed, however, in an eye-opening Forrester Consulting study commissioned by BMC Software: “The Business Case for Better Problem Resolution.” Forrester Consulting conducted anonymous interviews with over 150 application development managers and executives throughout North America, eliciting rare insights into the manual steps and cumulative hidden costs associated with the process of application problem resolution.

The study concluded that, for most organizations, problem resolution is a highly inefficient process. Developers expend an alarming amount of time—nearly a third of their workday—identifying and trying to recreate problems that either 1) they discovered during unit testing, 2) were submitted by the pre-production test/QA team, or 3) are escalated by application support. Testers are similarly bogged down with documenting problems that they encounter and by the frequent back-and-forth communication with the development team about particular problems.

The end result: problems take far too long to document and resolve--much longer than management realizes. According to the Forrester Consulting study, it takes an average of six days to resolve a single application problem, with 11 percent of problems taking more than ten days to resolve. Of course, this varies widely by the nature of the issue and the specific application, but regardless, this excessive amount of time can create a chain reaction of resultant business issues. The time spent on identifying and trying to recreate a problem alone can cost a good deal of money, and that’s if the problem is even reproducible. Additionally, management should tally the increased costs of development resources, reduced IT team productivity, and disruption of revenue generating activities. Then there are the soft costs: reduced customer satisfaction from long time-to-resolution cycles, slower time-to-market, quality versus functionality tradeoffs, as well as damage to the company’s brand if word of a major production issue hits the streets.

Unfortunate, uncomfortable, and unprofitable tradeoffs
When developers are distracted and bogged down by trying to identify the root cause of a problem, they are no longer focused on core development activities that truly add business value. And when testers are spending time manually gathering problem information and documenting problems, they are no longer focused on uncovering application issues prior to release. This drain on resources results in unfortunate and measurable tradeoffs between release dates, software stability, software performance, software usability, and software functionality. Think about how often in your organization release dates slip, planned features are deferred, and even known application issues are allowed to be released into production. The more inefficient the problem resolution process is, the more painful, visible, and costly these tradeoffs become. Imagine the impact this can have on customers, shareholders, and business partners if they have little confidence in promised delivery dates.

Dude, where's my code?
While they’re busy trying to write new code, developers are constantly bombarded by application problems from at least three sources. First, developers often discover unexpected application behavior as they’re coding. In addition, developers must put aside their coding to resolve application issues discovered by the test/QA organization that may or may not be related to something they coded. And finally, when issues are escalated from application support groups, it’s usually a drop-whatever-you’re-doing situation to deal with a very unhappy customer. In each of these cases, new development grinds to a halt until developers can determine the problem’s root cause (or, worst case, merely a way to make the symptom go away) and then repair it. Any developer will tell you that fixing a problem is easy once the root cause is determined. Thus, ineffective problem resolution processes drive down productivity for developers in particular. Beyond the impact to developers and new application logic, there are testers, help desk engineers, operations managers, IT executives, and end users themselves, who are often pulled away from core responsibilities when application problems go unsolved for too long.

Are your application testers prolific documenters or prolific testers?
The reason testing costs run so high is inefficiency in the core testing process. According to the Forrester Consulting study, it generally takes an average of an hour to create just one problem report. This means the average tester can document (let alone uncover) only a measly eight problems in a given day. The problem report they create typically requires manually gathering and documenting information such as:

• Written description of the problem
• Steps to recreate the problem – every click and keystroke
• Screenshots of the application at each step leading up to and including the problem
• System and environment information
• Dumps and snippets of any available server and application logs

This time and expense adds up quickly. The more time testers spend documenting problems, the less time they have to discover them, which means either more application problems go unnoticed before the code is released, releases must be delayed, or testing teams must be expanded.

The root cause of user angst
A primary way customers evaluate software vendors is on how promptly they resolve reported issues. If a problem is serious enough or festers long enough to disrupt the customer’s business or impact its own employees’ productivity, this can cause the vendor to lose future business, incremental revenue such as maintenance renewals and related services, and invaluable customer referrals. As in the case of service level agreements (SLAs), some costs are even more immediately felt. Increasingly, software vendors take a direct hit to the pocketbook for violating performance, up-time, or other agreed-upon service metrics. Longer-term, endemic customer satisfaction woes can cause ill-will that may irreparably harm a vendor’s brand. Even when your “customers” are end-users within your own company, the IT department risks a tarnished image and business performance can suffer when applications are buggy and/or service levels aren’t met.

Shifting from manual to automatic problem resolution
Few companies have an automated way to collect detailed, synchronized information in a meaningful way when an application problem arises. This results in help desk representatives spending time eliciting anecdotal evidence from users about their experiences, product support piecing together clues from disparate hardware and monitoring systems, and testers left trying to record their steps to determine exactly which build they have tested. All the effort that goes into pulling together this data often does not prove adequate to solve the problem. Developers receive disconnected, unsynchronized bits of information from server logs, live conversations, and other sources. But this rarely provides the context needed to identify the root cause from among the countless elements involved in application behavior. From this incomplete information, developers must then recreate the problem. This is easier said than done. There may be numerous differences between the development, test, and production environments and the environment at a customer site. Thus, it is not surprising that those interviewed by Forrester Consulting reported that, on average, 25 percent of problems are not reproducible.

This iterative, haphazard and time-consuming process could be cut down dramatically with an automated problem resolution solution that captures and collates detailed information about the application and environment at the time the problem occurred (not afterward). Yet because this manual process is so ingrained and overlooked in most development organizations, management unconsciously maintains the status quo.

Calculating the return on automating problem resolution processes
Once management teams assess the true costs of inadequate manual problem resolution, it becomes fairly straightforward to justify investments in an automated problem resolution solution. Today’s solutions for automated application problem resolution enable both developers and testers to maximize their value to their organizations by simply coding more and testing more. Benefits to a company are myriad but a basic efficiency ROI calculation is straightforward.

As the Forrester Consulting study confirmed, developers spend an average of 29 percent of their time on problem resolution, while testers and support personnel gather and communicate over six distinct pieces of information on each application problem. So, for a team of 100 developers, 50 testers, and 50 support engineers that leverages an automated problem resolution solution shown to improve developers’ problem resolution efficiency by 50 percent and test/support documentation efficiency by 75 percent, the savings can be significant:

Assumptions:
• Developer rate: $100,000/yr
• Tester/QA Engineer rate: $50,000/yr
• Test/QA engineer submits 6 application problems per day
• Support engineer escalates 2 application problems per day
• Developer finds root cause of problem 50 percent faster via automated, comprehensive, synchronized application problem documentation and fewer problem “round trips”
• Test/Support engineer documents problems 75 percent faster (1 hour vs. 15 minutes) by automatically capturing complete, integrated information about the application problem as and where it occurs

ROI Calculation:


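The arithmetic behind the savings figure can be sketched directly from the assumptions above. Here is a minimal back-of-the-envelope calculation in Python, assuming an 8-hour workday and a $50,000/yr rate for support engineers (the article does not state a separate support rate, so that figure is an assumption):

```python
# Back-of-the-envelope ROI sketch from the stated assumptions.
# Assumed (not stated in the article): 8-hour workday; support
# engineers billed at the same $50,000/yr rate as testers.

DEVS, TESTERS, SUPPORT = 100, 50, 50
DEV_RATE, TEST_RATE, SUPPORT_RATE = 100_000, 50_000, 50_000
HOURS_PER_DAY = 8

# Developers spend 29% of their time on problem resolution;
# automation cuts that in half.
dev_savings = DEVS * DEV_RATE * 0.29 * 0.50

# Testers submit 6 problems/day at ~1 hour each (6 of 8 hours spent
# documenting); documentation time drops 75% (1 hour -> 15 minutes).
tester_savings = TESTERS * TEST_RATE * (6 / HOURS_PER_DAY) * 0.75

# Support engineers escalate 2 problems/day at ~1 hour each
# (2 of 8 hours); same 75% documentation saving.
support_savings = SUPPORT * SUPPORT_RATE * (2 / HOURS_PER_DAY) * 0.75

total = dev_savings + tester_savings + support_savings
print(f"${total:,.0f}")  # $3,325,000
```

Under these assumptions the annual savings come to roughly $3.3M, consistent with the "over $3M" figure cited below; the exact number shifts with the assumed workday length and support-engineer rate.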
Merely by automating its application problem resolution processes, this moderately sized application development team can reallocate over $3M to develop new applications or functionality, improve quality, or release applications faster. And these are only the hard savings.

Conclusion
The Forrester Consulting study revealed that nearly one third of managers grossly underestimate the time their teams spend on application problem resolution. However, there is no question that the inefficiency of current methods is causing a huge burden. Ultimately, once businesses make the connection between problem resolution and the hard and soft costs associated with lost development cycles, it’s hard not to get on board with a more efficient means of identifying and resolving application problems.

Biography
Doug Laney runs BMC Software’s global application development optimization professional services practice. His organization specializes in implementing the AppSight solution for automating the cumbersome and costly process of application problem resolution at major enterprises and ISVs.

Mr. Laney is an accomplished practitioner and internationally recognized authority on information management, analytics and data quality. He is considered a pioneer in the field of data warehousing, and developed the industry’s first comprehensive data warehouse/business intelligence project methodology, ITERATIONS (acquired and offered today by IBM). He is a frequent author and speaker at industry events, has advised and consulted to hundreds of organizations on these topics, and has sat on a number of software company advisory boards.
