Acutest is an independent testing consultancy that works primarily with blue-chip clients; it claims that 50 percent of the testing done on technology-centred change projects is wasted.
ZDNet UK spoke with the company's chief executive Barry Varley about the role of testing in software projects.
Q: You've conducted surveys of IT professionals and their approach to testing. What have you found?
A: There's a huge variability in the answers they give, and a massive discrepancy between what they think they've done and how well they've actually done. In one simple exercise, they thought they'd scored around 60 percent when the actual result was 16 percent. Another area where there's a great difference between perception and reality is timing. When we asked how long they thought a particular test would take to do in real life, there was a massive underestimate — on average, by 50 percent. Everyone is surprised — everyone thinks it's much simpler than it is.
So why didn't people learn from the Y2K experience, when testing became a major focus for everyone?
With Y2K, everyone thought it was a one-off. It was approached as a special case, something that would never have to happen again. Nobody applies the lessons learned.
So why do you think so much testing is wasted?
We looked at a lot of test areas and put them on a graph where the impact of failure of a particular area was plotted against the likelihood of it happening. The vast majority of testing was concentrated in areas that were unlikely to fail and wouldn't have much of an impact if they did. People don't look at testing in a risk-based way — they need to do far more risk analysis. Instead, they do things like test in order of component delivery, something that needs little planning but doesn't reflect the relative importance of each component.
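The risk-based prioritisation described here can be sketched roughly as follows. This is an illustrative outline only; the area names and the 1–5 likelihood and impact scores are hypothetical, not taken from any real project.

```python
# Illustrative sketch of risk-based test prioritisation:
# score each area by likelihood of failure x impact of failure,
# then spend test effort on the highest-scoring areas first.

test_areas = {
    # area: (likelihood 1-5, impact 1-5) -- hypothetical scores
    "payment processing": (4, 5),
    "user login":         (3, 4),
    "report formatting":  (2, 1),
    "help pages":         (1, 1),
}

def risk_score(likelihood, impact):
    """Simple risk metric: likelihood multiplied by impact."""
    return likelihood * impact

# Rank areas from highest to lowest risk.
ranked = sorted(test_areas.items(),
                key=lambda item: risk_score(*item[1]),
                reverse=True)

for area, (likelihood, impact) in ranked:
    print(f"{area}: risk {risk_score(likelihood, impact)}")
```

Testing in order of component delivery, by contrast, amounts to ordering this list by arrival date rather than by score, which is why it needs little planning but misallocates effort.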
Where are the major failures in methodology?
People don't involve the business in testing, and don't involve testing in the design. They're also oblivious to ambiguities in the specification, especially when it comes to offshoring. Context is lost. People say "when it comes back, we'll test it", but what you get back is not what you expected. Also, a whole bunch of business changes are left too late, when they should be integrated into the design and testing as soon as possible.
Do you see things getting better?
There are changes. Governance demands hard information during a project, and testing is a part of that. Businesses are seeing good testing and better involvement of the business as a differentiator, and are doing more business process testing, which has never been done before. And by looking at points of failure, you can identify risk areas. Nobody's got time to test everything, so you have to do risk analysis, and that means you can concentrate not just testing but design and implementation effort where it does most good.
We've seen projects where 500 people were involved in testing. It wasn't their main job, they were brought in from all sorts of areas, but they brought their expertise to bear on the areas that affected them. Many projects have large teams of external testers brought in — that will diminish. And people are learning to monitor errors, to analyse what happens during operation.
We saw one bank that launched a big new service with a bunch of forms, but they had no time to check the forms before the project went live. Suddenly, they discovered that there were a load of requests from their Birmingham office for information in Welsh. It turned out that the Welsh tick box was too close to the 'no more info' tick box. Monitoring the errors showed this.
Testing can provide information. It's about clarity — it provides clarity and confidence. You can use information from the exercise for all sorts of things, such as focusing resources in problem areas.
If you don't test, you must understand the risk. And the more you align testing with your business plans, the more you get out of it.
What should smaller companies do, if they're not in the market for blue-chip consultancies?
The inability of management teams to scope and assess testing requirements accurately when introducing new systems and applications is typical of organisations of any size. But for SMEs, a flawed testing framework that stops IT systems and applications coming in on time and on budget is perhaps even more costly.
The key to effective testing for SMEs is raising awareness of testing techniques at a management level and quickly giving the staff involved the skills required to do the job effectively. A skills injection could come as part of an investment in a training course, or from recruiting a new member of staff with the skills and experience to fill the gap.
The cost of using external suppliers or experts for testing is usually a barrier. Training and raising internal skills and capabilities is a very realistic solution, but SMEs should also analyse how the reductions in development time and support activity that come with effective testing outweigh the cost.