What would a memory-centric system look like?

Data is growing much faster than network bandwidth, and storage keeps getting cheaper while networks don't. Which means that, someday, we'll move to memory-centric systems. What would they look like? Here's a vision from the former director of HP Labs and the new CTO of Western Digital.
Written by Robin Harris, Contributor

Martin Fink's vision starts with petabyte-scale memory systems.

One of the keynotes at the recently concluded Non-Volatile Memory Workshop 2017 was given by Martin Fink, a 31-year HP veteran and newly appointed CTO of Western Digital. Martin ran HP Labs and HP's high-end server business, and has been thinking about memory-centric computing for years. If the terms "memristor" and "polymorphic computing" ring a bell, you can thank HP Labs for injecting them into the technosphere.

Scale first

Today, memory is anchored to server CPUs, and in-memory computing is all the rage. With new NVDIMMs and lots of DIMM slots, servers supporting 2TB of memory are now possible. Does that seem like plenty?

Martin's vision starts with petabyte-scale memory systems. DRAM would be too costly and too much of an energy hog, so the vision counts on one of the NVRAM technologies, such as ReRAM (resistive RAM), becoming feasible at that scale.
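For a sense of scale, here's a back-of-the-envelope sketch in Python. The DIMM counts, price, and power figures are illustrative assumptions on my part, not numbers from the talk, but they show why a petabyte of DRAM is a non-starter.

```python
# Back-of-the-envelope: today's 2TB servers vs. a petabyte-scale memory pool.
# The $/GB and W/GB figures below are rough, illustrative assumptions,
# not numbers from Martin's talk.

DIMM_SLOTS = 16            # assumed DIMM slots in a two-socket server
NVDIMM_GB = 128            # assumed capacity per NVDIMM

server_tb = DIMM_SLOTS * NVDIMM_GB / 1024
print(f"One server: ~{server_tb:.0f} TB of memory")          # ~2 TB

PETABYTE_GB = 1024 ** 2            # 1 PB expressed in GB
DRAM_COST_PER_GB = 8.0             # assumed 2017-era server DRAM price, $/GB
DRAM_WATTS_PER_GB = 0.1            # assumed refresh + access power, W/GB

cost_m = PETABYTE_GB * DRAM_COST_PER_GB / 1e6
power_kw = PETABYTE_GB * DRAM_WATTS_PER_GB / 1000
print(f"1 PB of DRAM: roughly ${cost_m:.0f}M and {power_kw:.0f} kW")
```

Even if the exact numbers are off by a factor of two either way, the conclusion holds: petabyte-scale memory needs something cheaper and less power-hungry than DRAM.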

Memory tech

Martin expects that multiple NVRAM technologies will be successful in the market, based on differing characteristics such as cost, durability, performance, and application. That will be a huge change from the current commodity DRAM market, allowing architects to tweak systems in ways not possible today. Just look at what far-from-perfect-but-much-cheaper NAND flash has done to storage in the last decade.

One major difference from flash is that the new memory must not require an erase cycle, as NAND flash does today. NAND flash erases are costly because an entire block has to be erased before any of its pages can be rewritten.
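To make that erase-before-write constraint concrete, here's a minimal sketch of a NAND block, a toy model rather than a real flash translation layer. Pages can be written individually, but changing an already-written page means erasing the whole block first (after copying any still-valid data elsewhere):

```python
class NandBlock:
    """Toy model of one NAND flash block: pages write once, erase is per-block."""
    PAGES_PER_BLOCK = 64

    def __init__(self):
        self.pages = [None] * self.PAGES_PER_BLOCK   # None means erased
        self.erase_count = 0

    def write(self, page_no, data):
        if self.pages[page_no] is not None:
            # In-place update is impossible: the whole block must be erased first.
            raise RuntimeError("page already written; erase the block first")
        self.pages[page_no] = data

    def erase(self):
        # A real controller must copy any still-valid pages elsewhere before this.
        self.pages = [None] * self.PAGES_PER_BLOCK
        self.erase_count += 1


block = NandBlock()
block.write(0, b"v1")
try:
    block.write(0, b"v2")        # rewrite attempt fails...
except RuntimeError as err:
    print(err)
block.erase()                    # ...until the entire 64-page block is erased
block.write(0, b"v2")
```

A memory you can simply overwrite in place, byte by byte, skips all of that bookkeeping.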

The petabyte-scale memory would have CPUs attached to it as needed, much as we now spin up thousands of CPUs in the cloud to run compute-intensive apps. Edge-centric mobile computing is a useful model, but the interconnect will have to be much faster and cheaper than wireless.

Architecture

Martin envisions that the memory pool will boot independently of the CPUs. You'll have a memory fabric that is its own system, with racks of CPUs that will be switched on and attached as needed.

The CPUs themselves may be general-purpose x86 CPUs, but it's likely that specialized processors will find favor for large-scale applications. Why not have a rack of GPUs or specialized RISC processors for applications that can use them?
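There's no standard API for any of this yet, so the sketch below is purely hypothetical: the memory pool is the long-lived, independently booted resource, and compute racks of whatever flavor get attached to it on demand and released when a job finishes. The class and method names are mine, not anything Martin described.

```python
# Hypothetical orchestration sketch of a memory-centric system: the memory
# fabric is the persistent resource; compute is attached and detached at will.
from dataclasses import dataclass, field

@dataclass
class MemoryPool:
    capacity_pb: float
    attached: list = field(default_factory=list)

    def attach(self, compute_rack):
        # In a real fabric this would map the rack into the pool's address space.
        self.attached.append(compute_rack)
        print(f"attached {compute_rack} to {self.capacity_pb} PB pool")

    def detach(self, compute_rack):
        self.attached.remove(compute_rack)
        print(f"released {compute_rack}; data stays resident in the pool")


pool = MemoryPool(capacity_pb=1.0)        # the pool boots and persists on its own
pool.attach("x86-rack-07")                # general-purpose pass over the data
pool.detach("x86-rack-07")
pool.attach("gpu-rack-02")                # then a GPU rack for the parallel phase
pool.detach("gpu-rack-02")
```

The key point is that the data never moves: only the compute comes and goes.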

The memory fabric

Memory access is basically load-store (with exceptions for CISC CPUs) and byte-addressable, which means that memory updates can be highly granular. That reduces bandwidth requirements and eliminates much of the overhead common in block-based storage, such as read-modify-write cycles on RAID systems.
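A quick calculation shows how much that granularity matters. The sketch below compares a 64-byte in-place update over a byte-addressable fabric with the same logical update on a RAID-5 array, where a small write becomes a read-modify-write of a data block and its parity block. The 4 KB block size is an assumption for illustration.

```python
# Bytes actually moved for a 64-byte update: byte-addressable fabric vs. RAID-5.
UPDATE_BYTES = 64
BLOCK_BYTES = 4096                 # assumed block size on the storage array

fabric_bytes = UPDATE_BYTES        # load-store fabric: move only the bytes changed

# RAID-5 small write: read old data + old parity, write new data + new parity.
raid5_bytes = 4 * BLOCK_BYTES

print(f"fabric: {fabric_bytes} B, RAID-5: {raid5_bytes} B "
      f"({raid5_bytes // fabric_bytes}x more data moved)")
```

For this example the array moves 256 times as many bytes as the fabric does, and that's before counting the protocol overhead on each I/O.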

Therefore, the memory interconnect, or fabric, won't have to accommodate a large number of corner cases or complex operations. That will make the interconnect switches much simpler, more reliable, faster, and cheaper than the network interconnects used by scale-out systems today.

The Storage Bits take

Memory-centric architectures are a coming thing, even if the timing is uncertain. As our advanced economies digitize more operations, the rate of data growth and the size of data sets are exceeding the ability of LANs and storage interconnects to move the data to CPUs. Therefore, the CPUs have to move to the data.
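To see why shipping the data to the CPUs stops scaling, consider how long it takes just to move a petabyte over common link speeds. This is simple arithmetic that ignores protocol overhead, so real transfers would be slower still:

```python
# Time to move 1 PB across a single network link, ignoring protocol overhead.
PETABYTE_BITS = 8 * 10**15

for name, gbps in [("10 GbE", 10), ("100 GbE", 100), ("400 GbE", 400)]:
    seconds = PETABYTE_BITS / (gbps * 10**9)
    print(f"{name}: {seconds / 3600:.1f} hours")
```

That works out to roughly nine days at 10 GbE and nearly a full day even at 100 GbE, for a single petabyte moved once.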

But will the required technology be available? I'm confident it will be. Today's hyperscale IT infrastructure makes it possible for multiple technologies to reach the scale required to make them economically viable. Different NVRAMs, for example, have multiple paths to volume: mobile, embedded, devices, servers, and HPC. And the growth of Big Data applications will continue to fuel demand.

The coming decade in computing is the most exciting I've seen in the last 40 years. It'll be fun watching it unfold.

Courteous comments welcome, of course. WD is an advertiser on my blog, StorageMojo.
