Researchers at Google recently released a paper detailing a new CAPTCHA system based on correct image rotation (Socially Adjusted CAPTCHAs), whose main purpose is to make CAPTCHAs easier for humans to recognize and much harder for bots. But with this and many other research papers emphasizing "bots vs. CAPTCHAs", the research overlooks a growing trend: if implemented, the new approach would actually be abused far more efficiently than the previous one.
How come? Despite the persistent attempts by malware-infected hosts to recognize CAPTCHAs algorithmically, at the end of the day it is the data entry teams, capable of solving 200,000 CAPTCHAs and charging $2 per 1,000 entries, that ultimately drive the CAPTCHA-solving economy.
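The economics behind that pricing are worth spelling out. A minimal back-of-the-envelope sketch, using only the figures quoted above (the volume and rate come from the text; everything else is simple arithmetic):

```python
# Back-of-the-envelope economics of a human CAPTCHA-solving team,
# using the figures quoted above: $2 per 1,000 solved entries.
RATE_PER_1000 = 2.00   # USD charged per 1,000 solved CAPTCHAs
VOLUME = 200_000       # CAPTCHAs a team is capable of solving

revenue = VOLUME / 1000 * RATE_PER_1000
print(f"Revenue for {VOLUME:,} CAPTCHAs: ${revenue:.2f}")
# -> Revenue for 200,000 CAPTCHAs: $400.00

# Per-CAPTCHA cost to the buyer: a fifth of a cent, which is why
# outsourcing to cheap human labor beats building recognition algorithms.
print(f"Cost per CAPTCHA: ${RATE_PER_1000 / 1000:.4f}")
# -> Cost per CAPTCHA: $0.0020
```

At a fifth of a cent per solved CAPTCHA, there is simply no incentive for a spammer to invest in image recognition research.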
A lot has changed since the research detailing "Inside India's CAPTCHA solving economy" was published last year.
In February this year, a novel approach was introduced by a Russian boutique vendor of CAPTCHA-solving services: a community-driven revenue-sharing scheme for CAPTCHA breaking. The concept mimics reCAPTCHA's ease of implementation and ubiquity, but with a malicious end in mind. It not only allows webmasters to implement CAPTCHA-solving forms on their registration pages, but also offers idle forum/community members the opportunity to solve CAPTCHAs and earn revenue in the process, with the successfully solved CAPTCHAs fed back into the system to fulfill yet another bulk request for bogus account registrations.
A practical example of how these human networks efficiently exploit CAPTCHA systems originally designed to fight bots, and facilitate cybercrime in the process, is the social networking worm Koobface (Koobface Facebook worm still spreading; Dissecting the Latest Koobface Facebook Campaign; Dissecting the Koobface Worm's December Campaign; The Koobface Gang Mixing Social Engineering Vectors).
Koobface is eating every social network's internal CAPTCHA barrier for breakfast not because the Koobface gang is taking advantage of a CAPTCHA recognition algorithm, but because it relies on CAPTCHA-solving services.
With human networks and bots clearly converging (see graph), Sergei also discussed a very pragmatic solution for defeating Koobface back then: injecting a large number of successfully accepted CAPTCHA images into Koobface's command and control server, having them resolved by the CAPTCHA-solving vendor, and the bill sent to the Koobface gang:
"In the real test, Facebook.com asked the Koobface to resolve the CAPTCHA image that reads "suffer accorn" - this image was pretty noisy for image recognition algorithms to resolve it successfully. But Koobface does not attempt to resolve it by itself. It submits this image to its C&C server. The server replies correct answer in about 34 seconds. Once the answer is received, Koobface submits the message via Facebook's compromised account including correct CAPTCHA answer."
"Detailed analysis of traffic between Koobface and its command-and-control server allowed tapping into its communication channel and injecting various CAPTCHA images in it to assess response time and accuracy. The results are astonishing – the remote site resolved them all.
But here is a twist: uploading a large number of random CAPTCHA images into its communication channel will load its processing capacity, potentially up to a denial-of-service point. Well, if not that far, then at least it could potentially harm its business model, considering that the cost of resolving all those injected images would eventually be paid by the Koobface gang."
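The flood-and-measure experiment described in the quotes can be sketched roughly as follows. Everything here is hypothetical: the `submit_to_c2` transport and the stub answer are stand-ins for the reverse-engineered Koobface channel, since the actual wire protocol is not reproduced in this post. The point is the loop itself: every injected image is solved by the human network and billed to the C&C operator.

```python
import time
from typing import Callable, List, Tuple

def flood_c2(images: List[bytes],
             submit_to_c2: Callable[[bytes], str]) -> List[Tuple[str, float]]:
    """Push each CAPTCHA image through the (hypothetical) C&C channel,
    recording the returned answer and the response time -- mimicking the
    experiment that observed ~34-second turnarounds. Every solved image
    costs the C&C operator money; that is the economic counterattack."""
    results = []
    for img in images:
        start = time.monotonic()
        answer = submit_to_c2(img)  # blocks until the human network replies
        results.append((answer, time.monotonic() - start))
    return results

# Stub transport standing in for the real reverse-engineered channel.
def fake_c2(img: bytes) -> str:
    return "suffer accorn"          # the sample answer from the quote above

stats = flood_c2([b"img1", b"img2", b"img3"], fake_c2)
avg = sum(t for _, t in stats) / len(stats)
print(f"{len(stats)} images solved, avg latency {avg:.3f}s")
```

Scaled up, the loop either saturates the solving vendor's processing capacity or, as the quote notes, simply inflates the bill the Koobface gang has to pay.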
The ongoing arms race is not between bots and CAPTCHAs; it's between human networks and systems originally meant to distinguish humans from bots. No CAPTCHA can survive a human, since it was designed to be recognized by one, and therefore making it easier for humans to recognize, as in Google's recent experiment, ultimately makes it easier for the CAPTCHA-solving economy to scale.
CAPTCHA is in pain, and it's humans, not bots, that are slowly killing it. What do you think?