
OpenAI could 'cease operating' in EU countries due to AI regulations

While AI experts call for AI regulation, OpenAI is considering exiting the bloc that answered that call.
Written by Maria Diaz, Staff Writer

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC.

Win McNamee via Getty Images

Sam Altman, OpenAI CEO, said the company could stop all operations in the European Union if it can't comply with impending artificial intelligence regulations. 

During a stop on what he's called the "OpenAI world tour," Altman spoke at University College London about the company's advancements and was asked about the EU's proposed AI regulations. The CEO explained that OpenAI takes issue with how the regulations are currently written. 

Also: 3 ways OpenAI says we should start to tackle AI regulation

According to Time, the regulations, which are still being revised, may designate the company's ChatGPT and GPT-4 as "high risk," requiring increased safety compliance. 

"Either we'll be able to solve those requirements or not. If we can comply, we will, and if we can't, we'll cease operating…," Altman said. "We will try. But there are technical limits to what's possible."

Altman explained that he didn't believe the law was fundamentally flawed but stressed that the details were critical. He expressed support for a balanced approach to regulation while acknowledging the risks of AI, particularly the potential for AI-generated misinformation to sway public opinion. 

Also: Meet Aria: Opera's new built-in generative AI assistant

On this point, however, Altman maintained that AI language models are less influential in spreading misinformation than social media platforms. "You can generate all the disinformation you want with GPT-4, but if it's not being spread, it's not going to do much," he added. 

Protesting AGI development

During his appearance at University College London, Altman was greeted by protesters gathered outside. The group expressed concerns over OpenAI's role in the future development of artificial general intelligence (AGI), a superintelligent AI system that could surpass human intelligence.

The group held signs that read "Don't build AGI" and "OpenAI, stop trying to build AGI," as shared by Twitter user James Vincent. This came after a statement from OpenAI on the development of artificial general intelligence, the company's view of AI regulation, and what consequences the lack of regulation could have on humanity.

Also: AI is coming to a business near you. But let's sort these problems first

"It's time that the public step up and say: it is our future and we should have a choice over it," protester Gideon Futterman told Time. "We shouldn't be allowing multimillionaires from Silicon Valley with a messiah complex to decide what we want."

The creation of AGI is a hot topic among experts and ethicists, though it is still considered far from becoming reality. 
