OpenAI pulls its own AI detection tool because it was performing so poorly

When OpenAI rolled out its AI detection tool earlier this year, its creators called it 'imperfect.' That was apparently generous.
Written by Artie Beaty, Contributing Writer

When OpenAI debuted an AI detection tool less than six months ago, it admitted the feature designed to help users spot text written by artificial intelligence was "imperfect." Now the company has quietly pulled the feature due to its "low accuracy." 

The announcement came in the form of an update to a January 2023 blog post where OpenAI announced the feature. A note at the top of the post now reads, "As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy." The update went on to say that the company was "currently researching more effective provenance techniques for text."

Also: How to use ChatGPT to write an essay

The feature, which was free to use, worked by analyzing a submitted passage of text and placing it into one of several likelihood categories, ranging from "very unlikely" to "likely" to have been written by AI.
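A labeling scheme like this is typically just a set of thresholds over a classifier's probability score. As a minimal illustrative sketch only (the threshold values and the intermediate label are assumptions for illustration, not OpenAI's actual implementation), it might look like:

```python
# Hypothetical sketch: bucketing a classifier's probability that a text is
# AI-written into likelihood labels like those OpenAI's tool displayed.
# The cutoff values and the "unclear" label are illustrative assumptions.

def likelihood_label(p_ai: float) -> str:
    """Map a probability in [0.0, 1.0] to a likelihood category."""
    if p_ai < 0.10:
        return "very unlikely"
    elif p_ai < 0.50:
        return "unlikely"
    elif p_ai < 0.90:
        return "unclear"
    else:
        return "likely"

print(likelihood_label(0.95))  # "likely"
print(likelihood_label(0.05))  # "very unlikely"
```

The weakness of any such scheme is that the underlying probability estimate has to be reliable in the first place, which is precisely where detectors have fallen short.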

When the tool debuted, Lama Ahmad, policy research director at OpenAI, told CNN that OpenAI didn't recommend using the tool in isolation because the company knew "it can be wrong and will be wrong at times." That may have been an understatement.

A study from earlier this month showed that AI detectors were especially bad at handling content written by people who didn't speak English as their first language, incorrectly flagging an average of 61% of their human-written essays as AI-generated. One program in the study wrongly flagged a whopping 97% of them.

Earlier this year, ZDNET tested several AI detection tools and found similar results -- the tools were fairly inaccurate and easy to trick. 

Also: These are my 5 favorite AI tools for work

Other AI detectors do exist, but OpenAI pulling its own tool from the public shows just how difficult it is to reliably detect AI-generated text. And it's easy to see how this could go wrong: in an educational or professional setting, unfairly accusing someone of plagiarism could have dire consequences.

As AI continues to progress, fears over its use to write essays and complete work are legitimate. But until detection tools improve significantly, we'll have to stick to policing it with human eyes.
