ChatGPT's latest challenger: The Supreme Court

The Supreme Court on Tuesday considered whether algorithmic recommendations are shielded from legal liability. The question could have big ramifications for tools such as ChatGPT or the new Bing.
Written by Stephanie Condon, Senior Writer
[Image: Steps of the United States Supreme Court. Credit: Getty Images]

The technology industry is taking center stage at the Supreme Court this week, where the justices are considering two cases that could profoundly change how the internet works. Both cases are related to the issue of liability -- whether technology platforms are responsible for the noxious content they host and sometimes algorithmically promote.

During oral arguments on Tuesday, Supreme Court Justice Neil Gorsuch pondered what it all means for the latest algorithmic innovation to take the tech world by storm -- generative AI that can offer recommendations to people. That trend encompasses conversational chatbots, including ChatGPT and Microsoft's new Bing chatbot.

The question arose during oral arguments for Gonzalez v. Google, which specifically asks whether platforms -- such as YouTube, TikTok, or Google Search -- can be liable for the targeted recommendations offered up by their algorithms. The case was filed by the relatives of a 23-year-old American woman, Nohemi Gonzalez, who was killed in Paris in 2015 when three ISIS terrorists fired into a crowd of diners.

Gonzalez's relatives alleged that Google, which owns YouTube, knowingly permitted ISIS to post radicalizing videos on YouTube, inciting violence and recruiting potential supporters. Beyond permitting the videos to be published, the complaint alleges that Google affirmatively "recommended ISIS videos to users" via its recommendation algorithm. 

The case comes down to whether algorithmic recommendations -- when YouTube suggests what you should watch next, or ChatGPT tells you where to go on vacation -- are protected under Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996. This law exempts online platforms from liability for content posted by third parties. 

While the case is about social media recommendations, Gorsuch linked the discussion to generative AI. As the Washington Post's Will Oremus reported, Gorsuch suggested generative AI would not be eligible for protection under Section 230.

"Artificial intelligence generates poetry," Gorsuch said. "It generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected. Let's assume that's right. Then the question becomes, what do we do about recommendations?"

This case could have major ramifications for companies like Google and Microsoft, as they attempt to integrate conversational chatbot recommendations into their search engine platforms. 

Of course, the internet has changed dramatically since Section 230 was written in 1996, and lawmakers have struggled for the past few years with how to update the law. While social media sites have primarily come under fire for testing its limits, the integration of chatbot recommendations into search engines will undoubtedly raise more liability questions.
