Nvidia announced on Monday that it's adding support for Arm's CPU architecture to its GPU platform. The chipmaker said the aim is to offer energy-efficient supercomputing by pairing Arm's power-efficient CPUs with Nvidia's accelerated computing.
Nvidia has invested in Arm for ten years, with a focus on embedded markets and self-driving cars, and said CPU support now brings the rest of its accelerated computing stack to the Arm platform. Nvidia is also touting the partnership as a way to provide an open architecture for supercomputing.
Once stack optimization is complete, Nvidia said it will support all major CPU architectures, including x86, POWER and Arm.
"Supercomputers are the essential instruments of scientific discovery, and achieving exascale supercomputing will dramatically expand the frontier of human knowledge," said Nvidia CEO Jensen Huang. "As traditional compute scaling ends, power will limit all supercomputers. The combination of Nvidia's CUDA-accelerated computing and Arm's energy-efficient CPU architecture will give the HPC community a boost to exascale."
The companies expect to release the stack by the end of this year.
Nvidia also unveiled what it says is the world's 22nd fastest supercomputer, the DGX SuperPOD. The system, built with 96 Nvidia DGX-2H supercomputers and Mellanox interconnect technology, is what Nvidia used to develop the brains of its autonomous vehicle platform. According to Nvidia, the DGX SuperPOD, which delivers 9.4 petaflops of processing capability, represents how modern AI should be trained: at scale, not on a single server or GPU.
"AI leadership demands leadership in compute infrastructure," said Clement Farabet, VP of AI infrastructure at Nvidia. "Few AI challenges are as demanding as training autonomous vehicles, which requires retraining neural networks tens of thousands of times to meet extreme accuracy needs."