FTC investigates ChatGPT-maker OpenAI for possible harm to users. Here's what you need to know

The government agency is asking OpenAI to provide details on how user data is being collected, utilized, and protected.
Written by Sabrina Ortiz, Editor

OpenAI's ChatGPT has become incredibly popular since its launch, making it the fastest-growing application of all time. With this many users, questions have arisen about how user data is being collected, utilized, and protected -- and now the Federal Trade Commission (FTC) wants answers. 

In a 20-page FTC document obtained by The Washington Post, the government agency asks OpenAI to present documentation on nearly all aspects of its large language model, with a special focus on OpenAI's user data handling and ChatGPT's output of false statements. 


According to the document, the FTC is investigating whether OpenAI is engaging in "unfair or deceptive privacy or data security practices" or if it has engaged in "deceptive practices" that could risk harm to consumers. 

The FTC is seeking detailed explanations on how OpenAI obtains its data, how that data is used to train the model, and the procedures in place to assess risk and safety. 

In addition, the FTC is asking OpenAI to disclose what steps it has taken to mitigate the risk of its LLM generating "false, misleading or disparaging" statements about real individuals. 

These investigations follow several incidents with ChatGPT that have caused concerns and even provoked lawsuits against OpenAI. 

For example, on March 20 a data breach exposed ChatGPT users' conversations and subscribers' payment information. The breach highlighted the potential risks of using AI tools and even led Italy to ban ChatGPT entirely, although the ban has since been lifted. 


Other instances include two class action lawsuits filed against OpenAI. One claims that the company has been using "stolen data" from customers to train and develop its products. 

The other lawsuit relates to ChatGPT's hallucinations and its ability to produce false statements about people. In OpenAI's first defamation case, a Georgia radio host sued the company after finding that ChatGPT was allegedly spreading false information about him, accusing him of embezzling money. 
