Nvidia unveils Orin, its next-gen SoC for autonomous vehicles and robots

Nvidia also said it will give the transportation industry access to its Nvidia Drive deep neural networks for autonomous vehicle development.
Written by Stephanie Condon, Senior Writer

Nvidia on Tuesday unveiled Orin, a new system-on-a-chip (SoC) designed for autonomous vehicles and robots, as well as a new software-defined platform powered by the SoC, called Nvidia Drive AGX Orin.

Orin comes about two years after Nvidia debuted Xavier, its previous SoC for autonomous driving and robots. Consisting of 17 billion transistors, the new SoC offers nearly 7x the performance of Xavier, the company says. It integrates Nvidia's GPU architecture and Arm Hercules CPU cores, as well as new deep learning and computer vision accelerators, delivering 200 trillion operations per second. 
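The "nearly 7x" figure lines up with Xavier's previously published rating of 30 trillion operations per second (TOPS) — a figure assumed here from Nvidia's earlier Xavier announcements, not stated in this article:

```python
# Sanity check of the "nearly 7x" performance claim.
# Orin's 200 TOPS comes from this announcement; Xavier's 30 TOPS is
# Nvidia's earlier published figure (an assumption, not from this article).
orin_tops = 200
xavier_tops = 30

speedup = orin_tops / xavier_tops
print(f"Orin vs. Xavier: {speedup:.1f}x")  # → Orin vs. Xavier: 6.7x
```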

Anyone developing on Xavier today will be able to run their code on Orin, Nvidia said. Both are programmable through open CUDA and TensorRT APIs and libraries. 

The new SoC is the result of four years of R&D and billions of dollars in investment. 

Meanwhile, Nvidia Drive AGX Orin is built to help OEMs develop complex families of software products for autonomous vehicles, ranging from Level 2 to fully autonomous Level 5. 

"This is a software-defined platform designed to run all the different neural networks," Danny Shapiro, Nvidia's senior director of automotive, said to reporters, "dozens of different neural nets, pre-processing and post-processing of data, and a wide variety of different algorithms that all have to operate simultaneously in the car, providing redundancy and diversity to achieve the highest level of safety at the system level."

In addition to announcing the new SoC and accompanying platform, Nvidia announced it's opening up access to its pre-trained deep neural network models for autonomous driving. The models cover object detection (including pedestrians, bikes, lanes, free parking spaces, and signs), gaze detection, gesture recognition, and path planning and mapping. 

"We've invested many, many years in developing this technology, and now we're making it available to our ecosystem," Shapiro said. "The reason we're doing this is we want to help the industry move forward."

Nvidia is also making available tools that will allow others in the transportation industry to customize the neural networks using their own datasets and target feature sets. The tools use active learning, federated learning, and transfer learning. 
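Nvidia didn't detail how these tools work, but the core idea of transfer learning can be sketched in a few lines: keep a pre-trained feature extractor frozen and fit only a small task-specific head on the new dataset. Everything below — the toy extractor, the dataset, and the training loop — is illustrative, not Nvidia's tooling:

```python
import math

# Illustrative transfer-learning sketch (not Nvidia's actual tools):
# a "pre-trained" feature extractor stays frozen, and only a small
# linear classification head is fitted to a new, hypothetical dataset.

def pretrained_features(x):
    """Stand-in for a frozen pre-trained backbone: maps a raw input
    to a small embedding. Its parameters are never updated."""
    return [math.tanh(x), math.tanh(x / 2)]

# Hypothetical target-domain dataset: (input, binary label) pairs.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

# Transfer learning: only the head's weights (w, b) are trained.
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(300):                      # epochs of plain SGD
    for x, y in data:
        f = pretrained_features(x)        # frozen forward pass
        z = sum(wi * fi for wi, fi in zip(w, f)) + b
        p = 1 / (1 + math.exp(-z))        # sigmoid
        g = p - y                         # gradient of the log loss
        w = [wi - lr * g * fi for wi, fi in zip(w, f)]
        b -= lr * g

# The fitted head should now separate the two classes.
for x, y in data:
    f = pretrained_features(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    print(f"x={x:+.0f}  predicted={int(z > 0)}  label={y}")
```

Because the backbone is never updated, a customer only needs enough data (and compute) to fit the small head — which is what makes customizing a vendor's pre-trained networks with in-house datasets practical.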

Meanwhile, Nvidia also announced that the mobile transportation platform provider Didi Chuxing will use Nvidia GPUs and AI technology to develop autonomous driving and cloud computing solutions. 

DiDi plans to use Nvidia GPUs in the data center to train machine learning algorithms, as well as the Nvidia Drive platform for inference on its Level 4 autonomous vehicles. 

DiDi will also be deploying Nvidia AI in the DiDi Cloud, using Nvidia technology for internal use cases such as traffic monitoring applications. The company is also launching virtual GPU cloud servers for cloud rendering, computing and gaming. 

