Samsung develops high-bandwidth memory with integrated AI processing

The new architecture will double the performance of AI systems, Samsung claims.
Written by Cho Mu-Hyun, Contributing Writer

Samsung Electronics said on Wednesday it has developed a high-bandwidth memory (HBM) integrated with artificial intelligence processing power.

The new processing-in-memory (PIM) architecture adds AI engines into Samsung's HBM2 Aquabolt, which first launched back in 2018.

The chip, called HBM-PIM, doubles the performance of AI systems while reducing power consumption by over 70% compared to conventional HBM2, Samsung claimed.

The South Korean tech giant explained that this is possible because installing AI engines inside each memory bank maximises parallel processing while minimising data movement. With this improved performance, Samsung said it expects the new chip to accelerate large-scale processing in data centres, high-performance computing systems, and AI-enabled mobile applications.
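The data-movement argument can be illustrated with a toy cost model. Everything below is invented for illustration: the bank count, word count, and per-operation energy costs are hypothetical and do not reflect Samsung's actual design or its 70% figure.

```python
# Toy cost model (hypothetical numbers, not Samsung's design):
# compare moving every word to a host processor vs. computing
# inside each memory bank and moving only per-bank results.

WORDS_PER_BANK = 1024
NUM_BANKS = 16
MOVE_COST = 10   # assumed energy units to move one word over the memory bus
OP_COST = 1      # assumed energy units for one in-bank operation

def conventional_cost():
    # Conventional HBM: every word crosses the bus before it is processed.
    words = WORDS_PER_BANK * NUM_BANKS
    return words * (MOVE_COST + OP_COST)

def pim_cost():
    # PIM: an AI engine in each bank processes local data in parallel;
    # only one partial result per bank crosses the bus.
    per_bank = WORDS_PER_BANK * OP_COST + MOVE_COST
    return per_bank * NUM_BANKS

print(conventional_cost(), pim_cost())
```

Under these assumed costs, in-bank processing spends an order of magnitude less energy, because the expensive bus transfers shrink from one per word to one per bank.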

Samsung added that HBM-PIM uses the same HBM interface as earlier generations, which means customers will not have to change any hardware or software to integrate the chip into their existing systems.

The chip is currently being tested inside customers' AI accelerators, with testing scheduled to be completed within the first half of the year. Samsung is also working with customers to build an ecosystem and standardise the platform.

The company's paper on the chip will be presented at the virtual International Solid-State Circuits Conference next week.
