ChatGPT won't enjoy same protection under Section 230 as social media, expert says

Unlike Meta and Twitter, ChatGPT may be found by the courts to, in fact, "develop" content, stripping it of the liability protection afforded by Section 230 of the US Communications Decency Act.
Written by Tiernan Ray, Senior Contributing Writer
Reviewed by Kelsey Adams

OpenAI's wildly popular ChatGPT program won't receive the same protections enjoyed by social media when it comes to legal responsibility for content, according to an analysis by University of North Carolina at Chapel Hill scholar Matt Perault posted Thursday on the blog Lawfare.

"Courts will likely find that ChatGPT and other LLMs are information content providers," wrote Perault, referring to the "large language models" that power ChatGPT and many similar natural language AI programs. 


"The result is that the companies that deploy these generative AI tools -- like OpenAI, Microsoft, and Google -- will be precluded from using Section 230 in cases arising from AI-generated content," Perault predicted, referring to Section 230 of title 47 of the US Code, a section added to the Communications Decency Act, part of the Telecommunications Act of 1996 passed by Congress.  

Section 230 has been used as a shield by Meta and other Internet companies to absolve themselves of legal responsibility for content posted by users.

As Perault explains, "Under current law, an interactive computer service (a content host) is not liable for content posted by an information content provider (a content creator)," because Section 230 stipulates that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

That has led to consternation among US legislators on both sides of the aisle who are displeased with the content moderation policies of Meta and Twitter and other companies for one reason or another, Perault noted.

OpenAI's ChatGPT is unlikely to get the protection of Section 230, Perault believes.

According to Perault, "The relevant question will be whether LLMs 'develop' content, at least 'in part.'"

He continued, "It is difficult to imagine that a court would find otherwise if an LLM drafts text on a topic in response to a user request or develops text to summarize the results of a search inquiry (as ChatGPT can do). In contrast, Twitter does not draft tweets for its users, and most Google Search results simply identify existing websites in response to user queries."


The result, he wrote, is that "[c]ourts will likely find that ChatGPT and other LLMs are excluded from Section 230 protections because they are information content providers, rather than interactive computer services."

If Perault is right and ChatGPT and other generative AI tools don't end up enjoying Section 230 protection, "the risk is vast," he wrote. "Platforms using LLMs would be subject to a wide array of suits under federal and state law" and "would face a compliance minefield, potentially requiring them to alter their products on a state-by-state basis or even pull them out of certain states entirely."
