OpenAI's ChatGPT has accumulated over 100 million users worldwide, highlighting both AI's positive use cases and the need for more regulation. OpenAI is now assembling a team to help it build safer and more robust models.
On Tuesday, OpenAI announced the launch of its OpenAI Red Teaming Network, a group of experts whose insights will inform the company's risk assessment and mitigation strategies for deploying safer models.
The network will formalize how OpenAI conducts risk assessments, involving experts at various stages of the model and product development cycle rather than through "one-off engagements and selection processes before major model deployments," according to OpenAI.
OpenAI is seeking experts from a wide range of backgrounds, including domain expertise in education, economics, law, languages, political science, and psychology, to name only a few.
But OpenAI says prior experience with AI systems or language models is not required.
Members will be compensated for their time and subject to non-disclosure agreements (NDAs). Since they won't be involved with every new model or project, the time commitment could be as little as five hours a year. You can apply to join the network through OpenAI's site.
Beyond OpenAI's own red teaming campaigns, members can engage with each other on general "red teaming practices and findings," according to the blog post.
"This network offers a unique opportunity to shape the development of safer AI technologies and policies, and the impact AI can have on the way we live, work, and interact," says OpenAI.
Red teaming is an essential process for testing the effectiveness of new technologies and ensuring their safety. Other tech giants, including Google and Microsoft, have dedicated red teams for their AI models.