OpenAI is hiring hundreds of contractors from different parts of the globe to help ChatGPT get better at coding, according to a report from Semafor.
More specifically, the company is hiring computer programmers to create training data that will include lines of code, as well as explanations of the code written in natural language.
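To make that concrete, here is a minimal sketch of what one such training example might look like. The field names and structure are assumptions for illustration only, not OpenAI's actual data format:

```python
# Hypothetical training example: a snippet of code paired with a
# plain-English explanation of what it does. The dict layout here is
# an assumption, not OpenAI's real schema.
training_example = {
    "code": "def squares(n):\n    return [i * i for i in range(n)]",
    "explanation": (
        "Defines a function that returns a list of the squares of the "
        "integers from 0 up to, but not including, n."
    ),
}

# A contractor would author both halves; a model can then learn to map
# between natural language and code in either direction.
print(training_example["explanation"])
```

The value of pairing the two is that the model sees not just what correct code looks like, but how a human describes its behavior.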
Also: The best AI art generators: DALL-E 2 and alternatives
OpenAI already has a model designed for translating natural language into code, called Codex. Before it launched in 2021, Codex was trained on data scraped from GitHub, the code repository owned by Microsoft.
Codex is good enough that Microsoft uses it to power GitHub Copilot, a service that helps programmers write code.
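To illustrate the kind of completion such a tool aims to produce: given a natural-language prompt, the model emits ordinary code beneath it. The function below is a hand-written illustration of that pattern, not actual Codex or Copilot output:

```python
# Prompt (natural language): "Return the n-th Fibonacci number,
# where fib(0) == 0 and fib(1) == 1."
#
# A Codex-style completion would be plain code like this
# (hand-written for illustration, not real model output):
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

In practice the generated code still has to be reviewed, a point LeCun returns to below.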
Also: What is ChatGPT and why does it matter? Here's everything you need to know
Recent research, however, suggests coding assistance could be improved by adding ChatGPT's capacity for dialogue with humans. It's no surprise, then, that OpenAI is investing in this area by hiring more contractors. Meanwhile, Stack Overflow, the Q&A site for programmers, has banned ChatGPT-generated answers because even low-quality answers can sound plausible.
To truly advance a coding assistant, according to Yann LeCun, chief AI scientist at Meta, researchers would need to build "a system that is capable of anticipating the effect of its own actions," one with "some sort of internal world model, a mental model of how the world is going to change as a consequence of its own actions."
Also: How to get started using ChatGPT
LeCun recently likened coding assistants such as Copilot to cruise control in cars: "Your hands need to remain on the wheel at all times," because Copilot can introduce errors into code with no awareness that it has done so.
"The question is, how do we get from systems that generate code that sometimes runs but sometimes doesn't?" said LeCun. "And the answer to this is all of those systems today are not capable of planning; they are completely reactive."