IBM: Our in-memory computing breakthrough will cut cost of training AI

IBM's nanoscale analog computers could solve the big-data energy crunch facing traditional architectures.
Written by Liam Tung, Contributing Writer

IBM Research says it has developed a new approach to in-memory computing that could give it an answer to the hardware accelerators that Microsoft and Google are pursuing for high-performance computing and machine-learning applications.

IBM's researchers describe the new 'mixed-precision in-memory computing' approach in a paper published today in the peer-reviewed journal Nature Electronics.

The company is eyeing an alternative to traditional computing architectures, in which software requires data to be shuttled between separate CPU and RAM units.

According to IBM, that design, known as a von Neumann architecture, creates a bottleneck for data analytics and machine-learning applications that require ever-larger data transfers between processing and memory units. Transferring data is also an energy-intensive process.

Part of IBM's answer to this challenge is an analog phase-change memory (PCM) chip, currently implemented in prototype as a 500-by-2,000 crossbar array consisting of one million nanoscale PCM devices.

A key advantage of the PCM unit is that it can handle most of the heavy-lifting data processing without needing to transfer data to a CPU or GPU, resulting in faster processing with a lower energy overhead.
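The article doesn't explain how the array computes, but PCM crossbars are typically used to perform matrix-vector multiplication in place: each weight is stored as a device conductance, the input vector is applied as a set of voltages, and the physics of the array sums the resulting currents. The NumPy sketch below is a rough, hypothetical model of that behaviour; the bit depth and noise figures are illustrative assumptions, not numbers from IBM's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(weights, x, bits=4, noise_std=0.02):
    """Toy model of a PCM crossbar multiply: weights sit in the array as
    quantized conductances, inputs arrive as voltages, and each output is
    the current summed along one line of devices."""
    w_max = np.abs(weights).max()
    levels = 2 ** bits - 1
    # Program each device to one of a handful of conductance levels (low precision).
    g = np.round(weights / w_max * levels) / levels * w_max
    # Add device-to-device variability and read noise.
    g = g + noise_std * w_max * rng.standard_normal(g.shape)
    # Ohm's law plus Kirchhoff's current law do the multiply-accumulate in place.
    return g @ x

A = rng.standard_normal((500, 2000))   # same shape as the prototype crossbar
x = rng.standard_normal(2000)

exact = A @ x
approx = analog_matvec(A, x)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

The result comes back in a single step, with no data marched between memory and a processor, which is where the speed and energy savings come from; the price is the few percent of error visible in the printout.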

IBM's PCM unit would serve as a CPU accelerator much like the field-programmable gate array (FPGA) chips that Microsoft is using to speed up Bing and boost its machine-learning chops.

According to IBM, its study demonstrates that under certain conditions its PCM chip can be operated in an analog manner to perform computing tasks, and offers comparable accuracy to a four-bit FPGA memory chip but with 80 times lower energy consumption.

The catch with analog PCM hardware is that it's not cut out for high-precision calculations. Fortunately, digital CPUs and GPUs are, and IBM thinks a hybrid architecture could strike a balance that delivers faster performance and greater efficiency as well as precision.

The design would leave most processing to memory and then hand off a lighter load to a CPU for a series of accuracy corrections.
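The article doesn't spell out the algorithm, but the division of labour it describes maps onto a familiar numerical pattern: an imprecise, cheap solver does the bulk of the arithmetic, and a high-precision processor repeatedly computes the exact residual and applies a correction. The Python sketch below illustrates that idea for solving a system of linear equations; the noise model, matrix size, and iteration counts are assumptions made for illustration, not figures from IBM's paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_matvec(A, v, noise_std=0.02):
    """Stand-in for a multiply done on the analog PCM array: energy-cheap,
    but with only a few bits of effective precision (modelled as noise)."""
    y = A @ v
    return y + noise_std * np.abs(y).mean() * rng.standard_normal(y.shape)

def mixed_precision_solve(A, b, outer=15, inner=50):
    """Solve A x = b: the bulk of the arithmetic uses the imprecise multiply,
    while an exact residual computed digitally steers the corrections."""
    step = 1.0 / np.linalg.norm(A, 2)        # safe step size for an SPD matrix
    x = np.zeros_like(b)
    for _ in range(outer):
        r = b - A @ x                        # exact residual, digital precision
        z = np.zeros_like(b)
        for _ in range(inner):               # approximately solve A z = r
            z += step * (r - noisy_matvec(A, z))
        x += z                               # accuracy correction
    return x

n = 200
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)                  # well-conditioned test matrix
b = rng.standard_normal(n)

x = mixed_precision_solve(A, b)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Each correction only has to be roughly right, so the final answer ends up far more accurate than any individual low-precision multiply; that is the balance between precision and efficiency the hybrid design is after.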

According to Manuel Le Gallo, an electrical engineer at IBM's Zurich lab and lead author of the paper, the design holds promise for cognitive computing in the cloud and could also help free up access to high-performance computers that researchers today need to compete for.

"With the precision we have right now, we can demonstrate a reduction in energy consumption of a factor of six compared with running on high-precision GPUs and CPUs," Le Gallo told ZDNet.

"So the idea is that to cope with imprecision when trying to do analog computing, we combine it with a standard processor. What we're trying to do is dump the bulk of computational tasks into PCM devices, but at the same time get a final result that is precise," he added.

The technique is better suited to applications that can tolerate a little imprecision, such as digital image recognition, where misinterpreting a few pixels doesn't prevent identification, as well as some healthcare applications.

"You can do the bulk of computations in low precision -- in analog, PCMs are very energy efficient -- and then use a traditional processor to improve accuracy."

It's still early days for IBM's prototype memory chip, which is only one megabyte in size. To be useful for modern, datacenter-scale applications, it will need to reach gigabytes of memory spread across trillions of PCM devices.

Still, IBM thinks it can achieve this goal by building larger arrays of PCM devices or having several of them operate in parallel.

Previous and related coverage

IBM launches open-source library for securing AI systems

The framework-agnostic software library contains attacks, defenses, and benchmarks for securing artificial intelligence systems.

World's smallest computer: IBM's fraud-fighter is so tiny it's almost invisible

IBM has big ambitions for its barely visible computer, including helping combat fraud with blockchain tech.

IBM's new launch: PAIRS Geoscope aims to be a search engine for geospatial big data

IBM wants enterprises to use its PAIRS Geoscope datasets to develop better geospatial services.
