
OpenAI is building a red teaming network to tackle AI safety - and you can apply

OpenAI's Red Teaming Network members will be compensated for their time, and prior experience with language models is not required.
Written by Sabrina Ortiz, Editor

OpenAI's ChatGPT has accumulated over 100 million users globally, highlighting both the positive use cases for AI and the need for more regulation. OpenAI is now putting together a team to help build safer and more robust models. 

On Tuesday, OpenAI announced that it is launching the OpenAI Red Teaming Network, a group of experts whose insights will inform the company's risk assessment and mitigation strategies and help it deploy safer models. 


This network will make OpenAI's risk assessments a more formal, ongoing process spanning various stages of the model and product development cycle, as opposed to "one-off engagements and selection processes before major model deployments," according to OpenAI. 

OpenAI is seeking experts from a wide range of backgrounds to make up the network, including domain expertise in education, economics, law, languages, political science, and psychology, to name just a few. 


But OpenAI says prior experience with AI systems or language models is not required. 

Members will be compensated for their time and subject to non-disclosure agreements (NDAs). Since they won't be involved with every new model or project, participating in the network could require as little as five hours a year. You can apply to join the network through OpenAI's site. 

In addition to OpenAI's red teaming campaigns, the experts can engage with each other on general "red teaming practices and findings," according to the blog post. 


"This network offers a unique opportunity to shape the development of safer AI technologies and policies, and the impact AI can have on the way we live, work, and interact," says OpenAI. 

Red teaming is an essential process for testing the effectiveness and ensuring the safety of new technologies. Other tech giants, including Google and Microsoft, have dedicated red teams for their AI models.
