Google and Microsoft partner with OpenAI and Anthropic to form AI safety watchdog group
Called the Frontier Model Forum, the partnership comes after a White House meeting last week with the same companies, plus Amazon and Meta, to discuss the secure and transparent development of future AI.
Some of the biggest names in technology are coming together to help keep AI safe. Tech giants Google and Microsoft are partnering with OpenAI and Anthropic to ensure that AI technology progresses in a "safe and responsible" fashion.
The watchdog group, called the Frontier Model Forum, noted that while some governments around the world are taking measures to keep AI safe, those measures aren't enough. Instead, the responsibility falls to the technology companies themselves.
In a blog post on Google's site, the forum listed four main objectives:
Improving AI safety research to minimize future risks
Identifying best practices for responsible development and deployment of future models
Working with policymakers, academics, and others to share knowledge about AI safety risks
Supporting the development of AI uses in solving humanity's greatest problems like climate change and cancer detection
An OpenAI representative added that while the forum would work with policymakers, it wouldn't get involved in government lobbying.
Membership in the forum is open to any company that meets three criteria: developing and deploying frontier models, demonstrating a strong commitment to frontier model safety, and showing a willingness to participate in joint initiatives with other members. What's a "frontier model"? The forum defines it as "any large-scale machine-learning models that go beyond current capabilities and have a vast range of abilities."
While AI has world-changing potential, the forum says, "appropriate guardrails are required to mitigate risks." Anna Makanju, vice president of global affairs at OpenAI, issued a statement: "It's vital that AI companies -- especially those working on the most powerful models -- align on common ground. This is urgent work. And this forum is well-positioned to act quickly to advance the state of AI safety."
Microsoft President Brad Smith echoed those sentiments, saying that it's ultimately up to the companies creating AI technology to keep it safe and under human control.