
Intel acquires Habana Labs for $2 billion

Habana Labs is known as a developer of programmable deep learning accelerators for the data center.
Written by Natalie Gagliordi, Contributor

Intel on Monday announced that it has acquired Israel-based programmable chipmaker Habana Labs for approximately $2 billion. Habana Labs is known as a developer of programmable deep learning accelerators for the data center, which Intel said will bolster its AI portfolio and ramp up its focus on the AI silicon market.

"This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center," said Navin Shenoy, EVP and GM of Intel's Data Platforms Group. 

"More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads."

Habana's core products target AI and machine learning workloads: the Gaudi AI Training Processor and the Goya AI Inference Processor. The Gaudi system is currently being sampled by select hyperscale customers and is expected to deliver up to a 4x increase in throughput in large node training systems. The Goya processor is commercially available and is said to deliver strong inference performance in both throughput and real-time latency.

Also: Intel unveils next-gen Movidius VPU, codenamed Keem Bay

Intel has focused significant resources on expanding its AI-specific hardware portfolio, as well as on developing AI for cloud, data center and edge computing use cases. Intel says AI already accounts for $3.5 billion in annual revenue for the company. Intel posits that its existing AI capabilities will help Habana scale and accelerate development of its AI accelerators. 

Going forward, Intel said Habana will continue operating independently under its current management team, reporting to Intel's Data Platforms Group. 

"Our combined IP and expertise will deliver unmatched computing performance and efficiency for AI workloads in the data center," Shenoy said.

