Nvidia CEO Jen-Hsun Huang apparently hand-delivered a DGX-1 to OpenAI researchers last week in San Francisco, Calif. Dubbed an "AI supercomputer-in-a-box", the DGX-1 will be used by the non-profit research team to explore the challenges surrounding artificial intelligence.
The idea is to find ways OpenAI can use the supercomputer as it works on projects like artificial personal assistants, autonomous cars, and robots for the everyman.
The DGX-1, which debuted earlier this year, is described by Nvidia as the first deep learning supercomputer built for artificial intelligence. The system is meant to be turnkey and to match the computing power of 250 x86 servers.
Nvidia claims the DGX-1 will let researchers and data scientists make better use of GPUs. The system ships with Nvidia's GPU training software, deep learning frameworks, and libraries for designing neural networks.
As for OpenAI, the non-profit research company backed by Elon Musk and Peter Thiel was founded last year with a mission of advancing artificial intelligence in ways that broadly benefit humanity. But its researchers say they have been limited by the computational power of their hardware, and that for AI to truly advance, GPUs need to be dramatically faster.
"The DGX-1 is a huge advance," said OpenAI Research Director Ilya Sutskever. "It will allow us to explore problems that were completely unexplored before, and it will allow us to achieve levels of performance that weren't achievable."
One of the applications OpenAI's researchers have in mind for the DGX-1 is a technique called "generative modeling," in which a system learns the structure of a dataset well enough to produce new, plausible data of its own, allowing machines to respond more intelligently.
"You can take a large amount of data that would help people talk to each other on the internet, and you can train, basically, a chatbot, but you can do it in a way that the computer learns how language works and how people interact," said OpenAI Research Scientist Andrej Karpathy.
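The idea Karpathy describes, learning from conversational data how words follow one another and then generating new text, can be sketched with the simplest possible generative language model: a bigram model. This is an illustrative toy, not OpenAI's actual approach; the tiny corpus and function names below are invented for the example.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "a large amount of data" of people
# talking to each other on the internet.
corpus = [
    "hello how are you",
    "hello there how is it going",
    "how are you doing today",
    "it is going well thank you",
]

# Count word-to-word transitions: a bigram model learns which word
# tends to follow which, the crudest form of "how language works".
transitions = defaultdict(list)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev].append(nxt)

def generate(start="hello", max_words=8, seed=0):
    """Sample a new sentence by repeatedly picking a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(max_words - 1):
        followers = transitions.get(out[-1])
        if not followers:  # no observed continuation; stop generating
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate())
```

A real chatbot replaces the bigram table with a neural network trained over far longer contexts, which is exactly the kind of workload the DGX-1's GPUs are built to accelerate.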