Academics from the UK and China have developed a new machine learning algorithm that can break text-based CAPTCHA systems faster, with less effort, and with higher accuracy than previous methods.
This new algorithm, developed by scientists from Lancaster University (UK), Northwest University (China), and Peking University (China), is based on the concept of a GAN, which stands for "Generative Adversarial Network."
GANs are a special class of artificial intelligence algorithms that are useful in scenarios where the algorithm doesn't have access to large quantities of training data.
Classic machine learning algorithms usually require millions of data points to train a model to perform a task with the desired degree of accuracy.
A GAN algorithm has the advantage that it can work with a much smaller batch of initial data points. This is because a GAN uses a so-called "generative" component to produce lookalike data. These "generated" data points are then fed to a "solver" algorithm that tries to guess the output.
As these two GAN components are pitted against each other, the solver improves as if it had been trained with millions of data points.
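The adversarial loop described above can be sketched with a deliberately tiny example. This is not the researchers' CAPTCHA solver; it is a minimal one-dimensional GAN, assuming real data drawn from a normal distribution around 4, a linear generator, and a logistic discriminator, with gradients worked out by hand:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_gan(steps=3000, batch=64, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Generator g(z) = a*z + b: produces "lookalike" data from random noise.
    a, b = 1.0, 0.0
    # Discriminator d(x) = sigmoid(w*x + c): tries to tell real from generated.
    w, c = 0.1, 0.0
    for _ in range(steps):
        # A small batch of "real" data points (here, samples from N(4, 0.5)).
        real = rng.normal(4.0, 0.5, batch)
        fake = a * rng.normal(0.0, 1.0, batch) + b
        d_real = sigmoid(w * real + c)
        d_fake = sigmoid(w * fake + c)
        # Discriminator step: ascend log d(real) + log(1 - d(fake)).
        w += lr * np.mean((1 - d_real) * real - d_fake * fake)
        c += lr * np.mean((1 - d_real) - d_fake)
        # Generator step: ascend log d(fake), i.e. push generated samples
        # toward whatever the discriminator currently scores as "real".
        z = rng.normal(0.0, 1.0, batch)
        fake = a * z + b
        g_up = (1 - sigmoid(w * fake + c)) * w  # d log d(fake) / d fake
        a += lr * np.mean(g_up * z)
        b += lr * np.mean(g_up)
    return a, b

a, b = train_gan()
# After training, generated samples should cluster near the real data's mean.
samples = a * np.random.default_rng(1).normal(0.0, 1.0, 1000) + b
```

The same dynamic, scaled up to images and a convolutional solver, is what lets the researchers' system amplify 500 collected CAPTCHAs into a large synthetic training set.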
The UK and Chinese academics applied this same concept to breaking text CAPTCHAs, which, in the vast majority of previous research studies, had only been attacked with classic machine learning algorithms trained on large quantities of initial data points.
Researchers argued that in a real-world scenario, an attacker wouldn't be able to generate millions of CAPTCHAs on a live website or API without being detected and banned.
That's why, for their research, they used only 500 text CAPTCHAs from each of 11 text CAPTCHA services in use on 32 of the Top 50 Alexa websites.
"It takes up to 2 hours (less than 30 minutes for most of the scheme) to collect 500 captchas and less than 2 hours to label them by one user," said researchers. "This means that the effort and cost for launching our attack on a particular captcha scheme is low."
The training data, listed in the table below, included text CAPTCHAs from sites like Wikipedia, Microsoft, eBay, Baidu, Google, Alipay, JD, Qihoo360, Sina, Weibo, and Sohu.
Once they had collected the samples and trained their GAN solvers by generating up to 200,000 "synthetic" CAPTCHAs, the researchers tested their algorithms against other text CAPTCHA systems used across the Internet, which had previously been tested in prior academic works.
"Table 4 [see below] compares our fine-tuned solver to previous attacks," researchers said. "In this experiment, our approach outperforms all comparative schemes by delivering a significantly higher success rate."
Researchers said their method was able to solve text CAPTCHAs with a 100 percent accuracy rate on sites like Megaupload, Blizzard, and Authorize.NET. In addition, their method achieved better accuracy on all other CAPTCHA systems used on the other 30 sites they tested, which included the likes of Amazon, Digg, Slashdot, PayPal, Yahoo, and QQ, just to name a few.
Besides improved accuracy, the researchers said the solver component of their GAN algorithm was also more efficient and cheaper than previous approaches.
"It can solve a captcha within 0.05 of a second by using a desktop PC," researchers said.
This means that attackers won't need to buy and keep paying for expensive cloud computing servers in order to break text CAPTCHAs in real time on websites.
Once an attacker has trained a text CAPTCHA algorithm, they can run it on a regular PC or web server, and launch coordinated DDoS or spam-posting attacks on websites where that CAPTCHA service is in use.
And because the algorithm is easy to train, attackers can quickly retrain it to handle a never-before-seen text CAPTCHA scheme as well.
"This is scary because it means that this first security defence of many websites is no longer reliable," said Dr. Zheng Wang, Senior Lecturer at Lancaster University's School of Computing and Communications and co-author of the research.
Zheng and his team recommend that website owners implement alternative bot-detection measures with multiple layers of security, such as a user's usage patterns, device location, or biometric data.
Earlier this year, Google launched such a service, version 3 of its reCAPTCHA tool, which the company said relies on machine learning algorithms to discern bots from actual users.
More details about the researchers' work can be found in a research paper entitled "Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach."