
Intel shows off latest 'Gaudi' AI chip, pitched towards enterprises

The chip is almost twice as fast as Nvidia's H100 at training large language models, Intel says, and 50% faster on inference.
Written by Tiernan Ray, Senior Contributing Writer

Intel CEO Pat Gelsinger focused his sales pitch for Gaudi 3 on enterprise customers, telling them a "third phase" of AI will mean automating complex enterprise tasks.

Intel

Chip giant Intel on Tuesday unveiled its latest chip dedicated to artificial intelligence processing, "Gaudi 3," hot on the heels of arch-rival Nvidia unveiling its Blackwell GPU two weeks prior. 

Unveiling the chip onstage during a live-streamed keynote at Intel Vision 2024, the company's customer and partner conference in Phoenix, Arizona, CEO Pat Gelsinger focused on Gaudi 3's appeal to enterprises, emphasizing goals such as automating enterprise tasks.

Also: Nvidia CEO Jensen Huang unveils next-gen 'Blackwell' chip family at GTC

Gaudi 3 is the third generation of Intel's dedicated chip for performing artificial intelligence training and inference. Intel acquired the chip family when it bought venture-backed startup Habana Labs of Tel Aviv in 2019 for $2 billion.

The Gaudi 3 is nearly twice as fast as Nvidia's mainstream GPU, the H100 chip, when training AI models such as the TensorRT large language model, said Gelsinger.

(An "AI model" is the part of an AI program that contains numerous neural net parameters and activation functions, which are the key elements for how the AI program functions.)

Gaudi 3 is 50% faster than the H100 when performing inference, in which a trained neural net makes predictions in response to real questions.
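
For readers who want a concrete picture of those terms, the toy Python sketch below (purely illustrative, and not tied to Gaudi or any production framework) shows a single-layer model whose parameters are adjusted during training and then reused, unchanged, at inference time.

    # Toy sketch of "parameters," "activation function," "training," and "inference."
    # Illustrative only; real large language models run on accelerators such as Gaudi 3 or the H100.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 1))              # the model's parameters (weights)

    def activation(z):                       # the activation function (a sigmoid here)
        return 1.0 / (1.0 + np.exp(-z))

    def predict(x):                          # inference: run inputs through the trained parameters
        return activation(x @ W)

    def train_step(x, y, lr=0.1):            # training: nudge the parameters to reduce error
        global W
        p = predict(x)
        grad = x.T @ ((p - y) * p * (1 - p)) # gradient of the squared error with respect to W
        W -= lr * grad

    inputs = rng.normal(size=(8, 4))                                # a small batch of example inputs
    labels = (inputs.sum(axis=1, keepdims=True) > 0).astype(float)  # toy target values

    for _ in range(200):                     # the training loop repeatedly adjusts W
        train_step(inputs, labels)

    print(predict(inputs).round(2))          # inference: predictions from the trained model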

The Gaudi chip family has shown proficiency in recent benchmark tests against Nvidia. In the most recent round of the MLPerf competition held by MLCommons, an industry consortium, the existing Gaudi 2 was the only data-center chip other than the H100 to compete at making predictions using Meta's open-source Llama 2 70-billion-parameter large language model.

Also: AI startup Cerebras unveils the WSE-3, the largest chip yet for generative AI

"Best of all, huge cost of ownership advantages for your organization," said Gelsinger. 

Gelsinger was joined onstage by guests including Michael Dell, IBM vice president of product Edward Calvesbert, Databricks head of generative AI Naveen Rao (a former Intel executive), and executives from Naver, which bills itself as South Korea's largest Internet company.


Gaudi 3 is the third generation of Intel's dedicated chip for performing artificial intelligence training and inference.

Intel

"What we need is more Gaudi 3 in volume," said Michael Dell, whereupon he was presented with a metal briefcase. Dell opened the briefcase and an eery blue glow emerged, evoking a scene from the Quentin Tarantino film Pulp Fiction, and expressed "Wow."  

The Gaudi 3 consists of 64 tensor cores on die to accelerate the matrix multiplications at the heart of AI processing, aided by eight discrete "matrix math engines." The part draws upon 96 megabytes of fast on-chip SRAM cache and a further 128 gigabytes of external "HBM3e" memory, the industry's fastest DRAM, composed of multiple memory dies stacked next to the processor.
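
To put those components in context, the short sketch below (an illustration with made-up layer sizes, not Intel code) shows the kind of matrix multiplication that tensor cores and matrix math engines exist to accelerate, along with a back-of-the-envelope check of why 128 gigabytes of on-package memory matters for a model like the Llama 2 70B cited above.

    # What the tensor cores and matrix math engines accelerate: at bottom, AI processing
    # is dominated by matrix multiplications of activations against weight matrices.
    # The sizes below are illustrative, not Gaudi 3 internals.
    import numpy as np

    activations = np.random.rand(1024, 4096).astype(np.float32)  # a batch of token activations
    weights = np.random.rand(4096, 4096).astype(np.float32)      # one layer's weight matrix
    output = activations @ weights                                # roughly 34 billion floating-point operations

    # Why 128 GB of HBM matters (rough arithmetic, not an Intel figure): a 70-billion-parameter
    # model such as Llama 2 70B stored at 8-bit precision needs about one byte per parameter.
    weights_gb = 70e9 * 1 / 1e9
    print(f"~{weights_gb:.0f} GB of weights vs. 128 GB of HBM per chip")  # ~70 GB, fits in memory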

The Gaudi 3 can achieve 1.84 petaFLOPS, or 1.84 quadrillion floating-point operations per second, when performing 8-bit floating-point math, a widely used measure of AI chip performance.
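
As a rough sense of scale (a sketch using the peak figure above, not a measured benchmark), the arithmetic below estimates how quickly the illustrative 4,096 x 4,096 layer from the earlier sketch could be processed at that theoretical peak rate.

    # Rough arithmetic only: theoretical peak, ignoring memory bandwidth and utilization.
    peak_flops = 1.84e15                   # 1.84 petaFLOPS of 8-bit floating-point math
    layer_flops = 2 * 1024 * 4096 * 4096   # multiply-adds in the earlier example count as two operations each
    print(f"{layer_flops / peak_flops * 1e6:.1f} microseconds per layer at peak")  # about 19 microseconds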


Not surprisingly, given the customer focus of the event, Gelsinger, with the help of onstage guests such as Dell, aimed his pitch squarely at enterprise tasks, where he said AI programs are moving from experimentation to implementation.

Gelsinger said Gaudi 3 will help to move AI through three different stages. The first, the present stage, is the "Age of AI Co-pilots," said Gelsinger. "The second age is nigh upon us, the age of AI agents."

Also: Making GenAI more efficient with a new kind of chip

The third stage would be the "Age of AI Functions," when the technology is implemented to "automate complex, enterprise-wide outcomes."

As the technology progresses to the third stage, said Gelsinger, the automation of functions, and the resulting efficiency, could mean that "maybe we'll have the first one-person, billion-dollar company," by achieving things such as "unlocking all the data" stranded in organizations.

"Intel, we were made for moments like this; together, with all of you, we're going to change the world again."
