Nvidia takes aim at Tesla's custom GPU claims

Elon Musk is a master at promotion, but Nvidia is laying out a few "inaccuracies" with Tesla's GPU comparisons.
Written by Larry Dignan, Contributor

Tesla CEO Elon Musk touted the company's custom GPU processor, saying Tesla has swapped Nvidia's silicon for its own and now has all the hardware in place for autonomous driving. A day later, Nvidia set out to diplomatically correct some Tesla "inaccuracies."

Anyone who has followed the semiconductor market knows benchmark fibs happen all the time. In this case, Nvidia is drawing attention to what it considers an apples-to-oranges GPU comparison.

On Monday, Tesla held an investor presentation to cover its autonomous driving plans and noted that it has replaced Nvidia's hardware with its own. Musk said:

How could it be that Tesla, who has never designed a chip before, would design the best chip in the world? But that is objectively what has occurred, not best by a small margin, best by a huge margin. It's in the cars right now. All Teslas being produced right now have this computer. We switched over from the NVIDIA solution for S and X about 1 month ago. And we first switched over Model 3 about 10 days ago. All cars being produced have all the hardware necessary, compute and otherwise, for full self-driving. I'll say that again, all Tesla cars being produced right now have everything necessary for full self-driving. All you need to do is improve the software.

Peter Bannon, Tesla's director of silicon engineering, offered far more detail about the custom GPU. He noted:

So here's the design that we finished. You can see that it's dominated by the 32 megabytes of SRAM. There's big banks on the left and right and the center bottom, and then all the computing is done in the upper middle. Every single clock, we read 256 bytes of activation data out of the SRAM array, 128 bytes of weight data out of the SRAM array, and we combine it in a 96 by 96 mul-add array, which performs 9,000 multiply/adds per clock. At 2 gigahertz, that's a total of 36.8 TeraOPS.
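
Bannon's numbers are easy to check: a 96-by-96 multiply-add array holds 9,216 multiply-accumulate units (which he rounds to 9,000), and at 2 GHz, counting each multiply-accumulate as two operations, that works out to roughly 36.9 trillion operations per second, in line with the 36.8 TeraOPS he quotes. A minimal sketch of that arithmetic, assuming the usual convention of two operations per multiply-accumulate:

```python
# Back-of-the-envelope check of the per-accelerator throughput Bannon describes.
# Assumes the usual convention of counting each multiply-accumulate as two ops.
array_dim = 96                    # 96-by-96 multiply-add array
macs_per_clock = array_dim ** 2   # 9,216 MACs per clock (Bannon rounds to 9,000)
ops_per_mac = 2                   # one multiply plus one add
clock_hz = 2e9                    # 2 GHz

tera_ops = macs_per_clock * ops_per_mac * clock_hz / 1e12
print(f"{macs_per_clock} MACs/clock -> {tera_ops:.1f} TeraOPS")  # ~36.9, quoted as 36.8
```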

He continued:

We had a goal to stay under 100 watts. This is measured data from cars driving around running a full autopilot stack. We're dissipating 72 watts, which is a little bit more power than the previous design, but with the dramatic improvement in performance, it's still a pretty good answer. Of that 72 watts, about 15 watts is being consumed running the neural networks.

In terms of costs, the silicon cost of this solution is about 80% of what we were paying before. So we are saving money by switching to this solution. And in terms of performance, we took the narrow camera neural network, which I've been talking about that has 35 billion operations in it, we ran it on the old hardware in a loop as quick as possible and we delivered 110 frames per second. And we took the same data, the same network, compiled it for hardware for the new FSD computer, and using all 4 accelerators, we can get 2,300 frames per second processed, so a factor of 21.
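
The "factor of 21" follows directly from the frame rates Bannon quotes, and multiplying the new frame rate by the 35 billion operations per frame gives a rough sustained-throughput figure; the latter is a derivation for illustration, not a number Tesla stated:

```python
# Speedup and implied sustained throughput for the narrow-camera network,
# using the figures from Tesla's presentation. The sustained-TOPS number is
# derived here for illustration; Tesla quoted only the frame rates.
old_fps = 110            # previous hardware, network run in a loop
new_fps = 2300           # new FSD computer, all 4 accelerators
ops_per_frame = 35e9     # ~35 billion operations in the narrow-camera network

speedup = new_fps / old_fps
sustained_tops = new_fps * ops_per_frame / 1e12
print(f"Speedup: {speedup:.1f}x")                        # ~20.9x, the "factor of 21"
print(f"Implied throughput: {sustained_tops:.1f} TOPS")  # ~80 TOPS across 4 accelerators
```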

Nvidia didn't exactly disagree with Tesla's take that the autonomous car of the future will have a supercomputer on board. However, Nvidia said Tesla is comparing its GPU against the wrong Nvidia system. Nvidia added that its next-gen processor, Orin, is on deck.

The GPU giant said:

It's not useful to compare the performance of Tesla's two-chip Full Self Driving computer against NVIDIA's single-chip driver assistance system. Tesla's two-chip FSD computer at 144 TOPS would compare against the NVIDIA DRIVE AGX Pegasus computer which runs at 320 TOPS for AI perception, localization and path planning.

Additionally, while Xavier delivers 30 TOPS of processing, Tesla erroneously stated that it delivers 21 TOPS. Moreover, a system with a single Xavier processor is designed for assisted driving AutoPilot features, not full self-driving. Self-driving, as Tesla asserts, requires a good deal more compute. 
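
Putting the two sets of claims side by side: Tesla's 144 TOPS for the two-chip FSD computer is consistent with four accelerators of roughly 36 TOPS each (two per chip), while Nvidia's argument is that the right comparison point is the 320 TOPS DRIVE AGX Pegasus rather than a single Xavier. A rough tabulation of the numbers each side cites; the per-accelerator breakdown is an inference from Bannon's figures, not something either company spelled out:

```python
# TOPS figures cited by each side, tabulated for comparison. The implied
# per-accelerator number is inferred from Tesla's figures, not stated by either company.
cited_tops = {
    "Tesla FSD computer (2 chips, 4 accelerators)": 144,  # Tesla's figure
    "NVIDIA DRIVE AGX Pegasus": 320,                      # Nvidia's suggested comparison
    "NVIDIA Xavier (single chip)": 30,                    # Nvidia's figure; Tesla cited 21
}

print(f"Implied TOPS per FSD accelerator: {144 / 4:.0f} (Bannon quoted 36.8)")
for system, tops in cited_tops.items():
    print(f"{system}: {tops} TOPS")
```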
