
Crowdsourced testing: the pay-per-bug payoff

Written by Adrian Bridgwater, Contributor

When I first began looking at 'crowdsourced' development as a testing methodology (if we can dignify it with a term that mature), I immediately questioned how different this might be from open source, community-driven application development.

My immediate suspicion was that it was simply a marketing term being bandied about by vendors who specialise in development disciplines such as application testing or, perhaps, software change management.

So while I was having a sniff around this week, my inbox received a sprinkling from TCL, the UK partner for uTest - a community of 20,000-plus 'virtual' testers sourced from all over the world to help companies test their mobile, web, desktop and gaming applications. This arrangement, so says TCL, builds a test proposition that is cheaper and more effective than an in-house offering.

Now here comes the difference... this is not community-driven application refinement for the greater good of all, ultimately leading to free software - these testers are paid per bug.

The company says this approach typically saves customers about 30% on their testing. Positioned as an extension of an in-house QA (quality assurance) team rather than a replacement, crowdsourcing, according to TCL, attracts and creates an interconnected community of testers who seek fame and recognition within the marketplace based on the quality and quantity of the bugs they find.

Citing a recent example, uTest set loose 1,100 software testers on four major search engines: Google, Yahoo, Bing and Google's Caffeine update. Google had the fewest and least severe bugs of the group, while Bing amassed the most bugs yet still scored well on the accuracy of its results.

(NB: there are also monthly 'Zapper' events where testers - both pros and amateurs - are invited to come along and test a new software application.)

Arguably, I suppose, any goods-producing organisation exercises a degree of quality control or testing before its products go to market. Web pages, software and mobile phone applications shouldn't really be any exception - should they?

TCL's chairman and all-round grand fromage Stewart Noakes is on record as saying: "Not only is software testing being revolutionised by the idea of using a global community of professional testers, but the concept has proven to deliver compelling, real world business benefits at a fraction of the costs of traditional software testing."

“The testers are customer-rated and motivated by a pay-for-performance model. Unlike hiring additional in-house QA personnel or signing long-term outsourcing contracts, crowdsourcing enables massive software testing coverage. Bespoke virtual QA teams with appropriate skills are constructed by environment and demographics such as access device type, application rules, geographic location etc and deliver real-time responsiveness. In fact, a complete test cycle can be run in 48 hours or less,” adds Noakes.
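
By way of illustration only - this is my own back-of-an-envelope sketch, not anything uTest or TCL actually run - here is roughly how that 'bespoke virtual QA team plus pay-for-performance' idea might be modelled in a few lines of Python: testers filtered by device type, location and customer rating, with payouts scaled by bug severity. All names and figures below are made up.

# Hypothetical sketch of the model Noakes describes: select a 'virtual QA team'
# by environment and demographics, then pay testers only for approved bugs.
# Not uTest's or TCL's actual platform; names, ratings and payouts are invented.

from dataclasses import dataclass

@dataclass
class Tester:
    name: str
    device: str      # e.g. "android", "ios", "desktop"
    country: str
    rating: float    # customer rating, 0-5

@dataclass
class Bug:
    tester: str
    severity: str    # "low", "medium" or "high"
    approved: bool   # customer confirmed it is a genuine, new bug

# Hypothetical pay-for-performance table, scaled by severity (in pounds, say).
PAYOUT = {"low": 10, "medium": 25, "high": 60}

def build_team(pool, device, countries, min_rating=3.5):
    """Pick a bespoke team by access device type, geography and rating."""
    return [t for t in pool
            if t.device == device and t.country in countries and t.rating >= min_rating]

def payouts(bugs):
    """Total per-tester payment, counting approved bugs only."""
    totals = {}
    for bug in bugs:
        if bug.approved:
            totals[bug.tester] = totals.get(bug.tester, 0) + PAYOUT[bug.severity]
    return totals

if __name__ == "__main__":
    pool = [Tester("ana", "android", "UK", 4.2),
            Tester("bo", "android", "DE", 3.9),
            Tester("cai", "ios", "UK", 4.8)]
    team = build_team(pool, device="android", countries={"UK", "DE"})
    bugs = [Bug("ana", "high", True), Bug("bo", "low", True), Bug("bo", "medium", False)]
    print([t.name for t in team])   # ['ana', 'bo']
    print(payouts(bugs))            # {'ana': 60, 'bo': 10}

The point of the toy example is simply that the 'team' is a query over a pool rather than a fixed headcount, and the cost is a function of confirmed results rather than hours billed - which is where the claimed savings and the 48-hour turnaround are supposed to come from.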

So what do you think? There isn't much reportage out there on this subject, so I thought it deserved a passing mention. Could this break the mould for testing as we know it? Could this be one of those developments popularised by the recession (thanks to the money-saving opportunities it represents) that lives on beyond it? This story is not fully told as yet, I feel.
