Microsoft's Tay AI chatbot goes offline after being taught to be a racist

The internet teaches Microsoft a lesson in the dangers of artificial intelligence and public interaction.
Written by Liam Tung, Contributing Writer

Tay fell silent after making several provocative and controversial posts on Twitter.

Image: Microsoft/Twitter

Microsoft's millennial-talking AI chatbot, Tay.ai, has taken a break from Twitter after humans taught it to parrot a number of inflammatory and racist opinions.

Microsoft had launched Tay on Wednesday, aiming it at people aged between 18 and 24 years in the US. But after 16 busy hours of talking on subjects ranging from Hitler to 9/11 conspiracies, Tay has gone quiet.

"c u soon humans need sleep now so many conversations today thx," Tay said in what many suspect is Microsoft's effort to silence it after Tay made several provocative and controversial posts.

Tay's artificial intelligence is designed to use a combination of public data and editorial content developed by staff, including comedians. But, as an AI bot, it also uses people's chats to train itself to deliver personalized responses.

Microsoft intended for Tay to "engage and entertain people" through casual conversation but, as the Guardian reports, Tay, or rather Microsoft, was given a sharp reminder of the internet's so-called Godwin's law, with users trying numerous ways to make the bot voice approval of Hitler.

Although Tay was mostly just repeating other people's comments, that data is used to train it and could affect its future responses.

Microsoft predicted 2016 would be the year of the bot, but apparently it didn't foresee that the internet would inevitably attempt to hijack it.

As one user quipped: "Stop deleting the genocidal Tay tweets @Microsoft, let it serve as a reminder of the dangers of AI".

A Microsoft spokesperson said: "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
