OpenAI is releasing its first commercial product, the company announced Thursday, giving businesses access to its most advanced general-purpose AI models via an API. The API, launching in private beta, is currently in use by customers for a range of applications including semantic search, sentiment analysis and content moderation.
While most AI models are designed for specific use cases, OpenAI's API provides a general-purpose "text in, text out" interface that could be applied to a wide range of English language tasks.
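To make the "text in, text out" shape concrete, here is a minimal sketch of how a client might package a text prompt as an HTTP request. The endpoint URL, parameter names (`prompt`, `max_tokens`), and header layout are assumptions for illustration; the private beta's actual interface may differ.

```python
import json
import urllib.request

# Hypothetical endpoint -- illustrative only, not a confirmed beta URL.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, max_tokens: int = 32) -> urllib.request.Request:
    """Package a text prompt as a JSON POST request for a completion-style API."""
    payload = {"prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
        },
        method="POST",
    )

request = build_completion_request("Translate to French: cheese ->")
```

The same single interface serves any English-language task; only the text of the prompt changes.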
The API runs models from the GPT-3 family, OpenAI's family of massive neural networks. The recently released GPT-3 has 175 billion parameters, enough to achieve "meta-learning": rather than being re-trained for each new task, the network can perform a task such as sentence completion simply by following examples supplied in its text prompt.
If you give the new API a text prompt, it will attempt to return a text completion that matches the pattern it was given. Users can hone its performance on specific tasks by training it on a small or large dataset of examples, or through human feedback provided by users or labelers.
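Honing performance with a small set of examples typically means placing the examples in the prompt itself (so-called few-shot prompting), so the model continues the pattern without any weight updates. A minimal sketch, using hypothetical vocabulary-sentence examples that echo the Quizlet use case described below:

```python
def build_few_shot_prompt(examples, query, input_label="Input", output_label="Output"):
    """Format worked examples followed by a new query so the model can
    continue the pattern -- no re-training involved."""
    lines = []
    for inp, out in examples:
        lines.append(f"{input_label}: {inp}")
        lines.append(f"{output_label}: {out}")
    lines.append(f"{input_label}: {query}")
    lines.append(f"{output_label}:")
    return "\n".join(lines)

# Hypothetical example data, invented for illustration.
examples = [
    ("benevolent", "The benevolent teacher stayed late to help every student."),
    ("arid", "Few plants survive in the arid desert climate."),
]
prompt = build_few_shot_prompt(examples, "candid")
```

Sending this prompt to the API invites a completion that uses "candid" in a sentence, matching the pattern of the two examples.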
Early customers include the online learning platform Quizlet, which is using the API to automatically generate examples of how vocabulary words can be used in a sentence. Reddit is exploring content moderation with the API, while the legal research platform Casetext is aiming to improve its semantic search capabilities with it. The cloud communications platform MessageBird is using the API to develop automated spelling and grammar tools, as well as predictive text capabilities.
OpenAI was founded in 2015 by former Y Combinator president Sam Altman and Tesla CEO Elon Musk. The research and deployment company is focused on "artificial general intelligence," which it defines as "highly autonomous systems that outperform humans at most economically valuable work."
The company is launching its API in private beta in part because of the risks that come with launching a multi-purpose AI tool.
"We will terminate API access for obviously harmful use-cases, such as harassment, spam, radicalization, or astroturfing," the OpenAI blog post said. "But we also know we can't anticipate all of the possible consequences of this technology."
In addition to limiting its availability, OpenAI said it is building tools to help users better control the content the API returns, and it's researching safety-relevant aspects of language technology, like analyzing, mitigating and intervening on harmful bias.