Software testing lacking serious effort

Summary: Understanding the impact of software failure on the business is an essential step in planning and carrying out testing, but companies seldom make the effort, leaving it to the last minute.


In today's digital era, testing software before it goes live is imperative, as a failure could damage a company's business and reputation. With brand loyalty increasingly fleeting, negative user reactions can also drive users to competitors.

These potential consequences carry more significance than the technical malfunction itself, said observers who unanimously stressed the importance of software testing.

The approach of "test early, test often" is unfortunately not adhered to by most companies, and test efforts are usually left to the last minute, said Jeff Findlay, senior solution architect, Asia-Pacific and Japan at Micro Focus. Companies hence struggle to test critical functionalities and transactions before the software's release date.

Ray Wang, principal analyst and CEO of Constellation Research, added that even when companies acknowledge that getting software right is critical, they do so in "faster, better, cheaper" mode. This limits not only the work done to prevent software failure but also the foresight and preparation for unexpected glitches in real life.

"Testing has to be elevated into a science and not an art," Wang said, adding that only a handful of companies actually grasp this point of integrated, agile development. These companies build test plans side by side with functional specs, which takes more time to plan but results in fewer bugs and errors, the analyst said.

"Just because you've counted all the trees doesn't mean you've seen the forest," said Rameshwar Vyas, CEO of Ranosys Technologies, which offers software testing services. Companies should always bear this in mind while drawing up a testing roadmap for any software release. Change management and risk management are also integral parts of the overall plan, so that all test cases covering what the software is not supposed to do are included as well, he advised.

Findlay emphasized that companies must ensure that what they test meets business requirements, which should be clearly defined, validated by all the stakeholders and kept in a central repository. Companies usually write disparate documents describing requirements differently and multiple times, resulting in a variety of interpretations which in turn leads to application failure and expensive rework, he explained.

These comments come after U.S. trading company Knight Capital Group made a trading loss of US$440 million after a botched software update made several erroneous orders on the New York Stock Exchange (NYSE) last Wednesday--and had to be bailed out with a financial lifeline. Barely a week later in Asia, the Tokyo Stock Exchange (TSE) experienced a computer failure in its backup systems which halted derivatives trading for 95 minutes on Tuesday.

How much is enough?
How companies should gauge the amount and duration of software testing to avoid such incidents boils down to how they associate risk with particular functions of the application, said Findlay.

It is important to understand the impact of failure on the business and, working backwards, mitigate those risks by ranking each test as early as possible so as to align it with the business requirement that drives it, the Micro Focus executive explained.
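
As a rough sketch of that working-backwards approach (the scoring model, weights and test names below are illustrative assumptions, not from the article), tests might be ranked by the business risk of the requirement each one covers:

```python
# Sketch: rank tests by the business risk of the requirement they cover,
# so the riskiest tests run first even if the testing window is cut short.
# The impact/likelihood scales and test names are illustrative assumptions.

def risk_score(impact, likelihood):
    """Simple risk model: business impact (1-5) x failure likelihood (1-5)."""
    return impact * likelihood

tests = [
    {"name": "place_order",   "impact": 5, "likelihood": 4},  # critical transaction
    {"name": "login",         "impact": 5, "likelihood": 2},
    {"name": "export_report", "impact": 2, "likelihood": 3},  # non-critical
    {"name": "change_theme",  "impact": 1, "likelihood": 1},
]

# Highest-risk tests first.
ranked = sorted(tests,
                key=lambda t: risk_score(t["impact"], t["likelihood"]),
                reverse=True)
for t in ranked:
    print(t["name"], risk_score(t["impact"], t["likelihood"]))
```

With these illustrative weights, the critical order-placement transaction sorts to the top and cosmetic functionality to the bottom, mirroring Findlay's point that rigor should follow business impact.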

When transactions are critical to business success, they must be thoroughly tested from both a functional and nonfunctional perspective, he noted. This is to see if the application correctly performs the transaction in a timely fashion for all users, regardless of how they access the application.

Tests associated with non-critical aspects of an application don't require the same rigor and less testing is acceptable, since the business will not be seriously impacted in the event of failure, he added.

Even after the software goes live, continuous and regular testing of its critical transactions is important, Findlay noted. To facilitate this, automated tests should be developed as early as possible and re-run regularly to ensure development efforts don't break the application. These automated tests should cover the developer's code, functionality of key transactions and application performance, he advised.
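
A minimal sketch of such an automated check, assuming a hypothetical `transfer()` transaction and an illustrative half-second performance budget (neither comes from the article), might look like this:

```python
# Sketch: automated checks for a critical transaction, meant to be re-run
# on every build so ongoing development doesn't break the application.
# transfer() is a hypothetical stand-in for the app's critical transaction;
# the 0.5-second budget is an assumed performance requirement.
import time

def transfer(balance, amount):
    """Hypothetical critical transaction: debit an account."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

def test_transfer_functional():
    # Functional: correct result, and invalid requests are rejected.
    assert transfer(100, 30) == 70
    try:
        transfer(100, 200)  # overdraft must raise
        assert False, "overdraft was not rejected"
    except ValueError:
        pass

def test_transfer_performance():
    # Non-functional: the transaction completes within its time budget.
    start = time.monotonic()
    transfer(100, 30)
    assert time.monotonic() - start < 0.5

test_transfer_functional()
test_transfer_performance()
```

Re-run on every build, a failure here flags a regression in either the transaction's correctness or its responsiveness, covering both the functional and non-functional angles Findlay describes.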

Topics: Software Development, Enterprise Software

Jamie Yap

About Jamie Yap

Jamie writes about technology, business and the most obvious intersection of the two that is software. Other variegated topics include--in one form or other--cloud, Web 2.0, apps, data, analytics, mobile, services, and the three Es: enterprises, executives and entrepreneurs. In a previous life, she was a writer covering a different but equally serious business called show business.



  • Testing as an Art or Science?

    Great article; you mention above that testing is a science, not an art. Interestingly, some in the Exploratory or Session-Based Testing world would argue that it can be more of an art than a science. It depends upon the level of knowledge and expertise your tester has. Using a third party (or offshore firm) for your testing, one that may not have deep knowledge of the app and the domain, will require a more scientific approach, versus having expert users who take a more exploratory approach (which does have some structure and methodology). Both can be effective; you just have to have the right tools, processes and methodology in place, depending upon the type of testers you have and the testing you want to do.
  • Use proven methodology

    There are some companies that are highly successful at testing software and do an excellent job minimizing serious bugs. Because each environment is unique, and customers might try to use a product in ways not imagined by the developer, there will always be post-production patching and bug fixes. No matter how good a QA department is, it is still more limited in resources than the combined customer base (one would hope, anyway).

    Where many companies fail is in not using a proven testing methodology, or simply cutting QA short in the original project plan. A good methodology involves a managed plan where testers are provided scripts and specific instructions on how to test and what to look for. A lot of consumer-grade software just gets "open testing": unmanaged, where beta customers play with the software and hopefully provide feedback on what is broken. Results from such a plan are often inconsistent and incomplete, reflected in a final product that shows a lack of diligence.
  • The problem is too many companies stress so much about

    releasing features now that they forget about testing, quality assurance, code review, design, etc. They quickly end up with unmaintainable spaghetti that only works by luck and magic.
  • If we only gave enough money...

    to the people who blew it and let a software rev out before it was ready, then all would be fine.

    This is how a cell phone is created. The design guys come up with a plan. The engineers make a CAD plan, then they make a thousand units for testing. When that passes, they send the phone to software, who send it to the lab to reverse engineer which lines do what so they can start writing software for it. When that's done, in a month or so, they begin writing the software.

    One day, someone asked "Hey, didn't we make this phone? does anyone know what line does what already?" You guessed it, they did. However, it was too late for the company. They're no longer designing phones in the USA.

    The moral of the story? A bureaucracy working harder with more money always loses to just having brighter people without a bureaucracy.
    Tony Burzio
  • "Open Testing" & Accelerating Test Plan Driven Testing

    I've read recently that Facebook doesn't have any testers in the company. They expect their developers to do testing and also rely upon a HUGE beta community. My guess is this works well with more consumer-oriented apps.
    Where you do use more structured testing approaches, I think the big challenge with agile development is that timelines are accelerated and testing gets squeezed. So how do you test smarter, or get your existing testing organization to be more efficient?
  • Article conveys no information

    There's a difference between talking about software testing and conveying information about software testing. This article looks more like search engine spam than anything else. Lots of words and phrases about software testing, but no real information conveyed.
    • It's a blog

      If it were actual news, there would be information involved.
  • Thanks for highlighting importance of QA
  • PushToTest is different!

    Hey all,
    I'm an intern at PushToTest. I can see you're all frustrated with software testing. I encourage you to try our product. TestMaker enables you to run the test by yourself. Check us out, we're always looking for constructive criticism.

  • Testing is necessary for beneficial implementation

    I am currently working with a software testing company, and I have personal experience with software testing as well. Software testing is one of the most important steps of software engineering. You need to approach it with care to remove all of the bugs and problems you could face during the implementation process. Much software fails at implementation precisely because of such issues.
    Muhammad Junaid Iqbal