
3 ways OpenAI says we should start to tackle AI regulation

Everyone seems to be in favor of AI regulation, even the biggest leaders in the space.
Written by Sabrina Ortiz, Editor
Image: AI technology worldwide illustration (Chadchai Ra-ngubpai/Getty Images)

AI regulation has been a hot topic as AI developments continue to grow in popularity and number every day. Government officials, tech leaders, and concerned citizens have all been calling for action. 

Now OpenAI, the company behind the wildly popular ChatGPT and a major player in the AI space, is joining the discussion. 

Also: 6 harmful ways ChatGPT can be used

In a blog post titled "Governance of superintelligence," OpenAI CEO Sam Altman, President and Co-Founder Greg Brockman, and Co-Founder and Chief Scientist Ilya Sutskever discuss the importance of establishing AI regulation now, before it is too late. 

"Given the possibility of existential risk, we can't just be reactive," said OpenAI leaders in the blog post. "Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example."

The blog post outlines three different courses of action that could serve as a good starting point for AI regulation. 

First, the post calls for some form of coordinating entity that focuses on the safe and smooth integration of AI technologies into society. 

Also: Most Americans think AI threatens humanity, according to a poll

Examples could include having major governments worldwide collaborate on a project that current efforts could become a part of, or establishing an organization focused on AI development that restricts the annual growth rate of AI capabilities, according to the post. 

The second idea OpenAI shared is the need for an international organization, akin to the International Atomic Energy Agency, to oversee AI or "superintelligence" efforts. 

Such an organization would have the authority to inspect systems, require audits, test for compliance with safety standards and more, ensuring safe and responsible development of AI models.

Also: Google's Bard AI says urgent action should be taken to limit Google's power

As a first step, the post suggests companies could begin implementing the standards such an international agency might one day require, and individual countries could start implementing those standards as well. 

Lastly, the post says a technical capability to make superintelligence safe is needed, though this remains an open research question that many people are working on. 

OpenAI also says it is important to let companies and open-source projects develop AI models below an established capability threshold, without "burdensome mechanisms like licenses or audits." 

The post comes a week after Altman testified at a Senate Judiciary Committee hearing addressing the risks and future of AI.
