
The end of RAID

Written by Robin Harris, Contributor

Low latency storage, fast multi-core CPUs, high-bandwidth interconnects and larger disk capacity are ending the reign of costly RAID controllers in favor of more elegant data protection.

A report from the front lines of storage innovation at the OpenStorage Summit.

Storage is at an inflection point. Low latency mass storage (flash and DRAM), faster interconnects (including PCIe 3.0) and multi-core CPUs have broken some pieces of the old storage stack. This means new opportunities for a fundamental rethinking of storage architectures.

What has broken? At a high level, today's storage stack latency is too high. When 4 ms was fast, who cared about another millisecond of latency? But with sub-millisecond flash and NVDRAM storage, that extra millisecond is no longer acceptable.
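
To put numbers on it - illustrative figures, not benchmarks - here's a quick sketch of how a fixed millisecond of software overhead goes from rounding error to the dominant cost as the media gets faster:

```python
# Illustrative latency budgets (assumed values, not measurements).
# Shows how a fixed software-stack overhead goes from noise to the
# dominant cost as the storage media gets faster.

STACK_OVERHEAD_MS = 1.0  # assumed per-I/O software overhead

media = {
    "15k RPM disk": 4.0,   # ms per random read (seek + rotate)
    "SATA flash":   0.1,   # ms per random read
    "NVDRAM":       0.01,  # ms per random read
}

for name, media_ms in media.items():
    total = media_ms + STACK_OVERHEAD_MS
    share = STACK_OVERHEAD_MS / total * 100
    print(f"{name:>12}: {total:5.2f} ms total, "
          f"software stack is {share:4.1f}% of the latency")
```

With a disk, the stack is about a fifth of the total; with flash or NVDRAM it is nearly all of it.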

At a lower level, the storage stack architecture is broken. Kernel-level locking and context switching that were "fast" compared to disks are too slow today.

Just as CPUs are going multi-core to improve concurrency, so must storage. Instead of kernel-level locking, we need to move to application-level locking that maintains multiple lock queues.
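
Here's a minimal sketch of the idea - sharded, per-application lock queues instead of one global lock - with made-up names, not any particular kernel's or vendor's API:

```python
# Minimal sketch of lock sharding: instead of one global lock on a
# single I/O queue, give each application (or core) its own queue and
# lock so submissions don't serialize on a shared kernel-style lock.
import threading
from collections import deque

class ShardedQueues:
    def __init__(self, shards: int):
        self.queues = [deque() for _ in range(shards)]
        self.locks = [threading.Lock() for _ in range(shards)]

    def submit(self, app_id: int, request: str) -> None:
        shard = app_id % len(self.queues)   # each app maps to its own queue
        with self.locks[shard]:             # contention only within a shard
            self.queues[shard].append(request)

    def drain(self, shard: int):
        with self.locks[shard]:
            items, self.queues[shard] = list(self.queues[shard]), deque()
        return items

qs = ShardedQueues(shards=4)
qs.submit(app_id=7, request="read block 42")
print(qs.drain(7 % 4))   # -> ['read block 42']
```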

And this affects RAID how? Back when RAID was taking off, 1 GB disks were the rule and rebuilds didn't take very long. But now SATA drives are 1,000x, 2,000x and even 3,000x larger.

As disks get larger, the time it takes to rebuild a failed disk gets longer too: many hours at a minimum, often a day or more, and sometimes even a week or more.
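
Some rough arithmetic shows why. The rebuild rates below are assumptions; real rebuilds under live application load are often slower still:

```python
# Rough rebuild-time arithmetic: time ≈ capacity / sustained rebuild rate.
# The rates are assumptions; rebuilds on a busy array run far slower.

def rebuild_hours(capacity_gb: float, rate_mb_per_s: float) -> float:
    return capacity_gb * 1024 / rate_mb_per_s / 3600

print(f"1 GB disk @ 5 MB/s:   {rebuild_hours(1, 5) * 60:6.1f} minutes")
print(f"1 TB disk @ 50 MB/s:  {rebuild_hours(1000, 50):6.1f} hours")
print(f"3 TB disk @ 50 MB/s:  {rebuild_hours(3000, 50):6.1f} hours")
print(f"3 TB disk @ 10 MB/s (busy array): {rebuild_hours(3000, 10):6.1f} hours")
```

A 1 GB drive rebuilt in minutes; a 3 TB drive on a busy array takes days.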

During rebuilds the system slows because every disk is seeking and the controller is rebuilding and writing the lost data. That hurts application availability.

RAID was designed to solve two problems: data availability using unreliable, cheap disks; and improved performance with slow - also cheap - disks. There was no flash, DRAM cost $25/MB, and cheap SCSI drives cost a fraction of what an enterprise 9" drive did.

So we need to simplify. We can use flash or NVDIMMs to quickly handle metadata requests. If we can afford it, we can even move hot files to non-volatile storage on very low latency PCIe buses.
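
As a toy illustration - a hypothetical placement policy, not a real file system - the routing decision is simple:

```python
# Toy data-placement policy (hypothetical): metadata and hot objects go
# to the low-latency tier (NVDIMM / PCIe flash), cold data goes to disk.

def place(kind: str, accesses_per_hour: int, hot_threshold: int = 100) -> str:
    if kind == "metadata":
        return "nvdimm"            # metadata always on the fast tier
    if accesses_per_hour >= hot_threshold:
        return "pcie-flash"        # hot files, if we can afford it
    return "disk"                  # cold bulk data

print(place("metadata", 5))    # -> nvdimm
print(place("data", 500))      # -> pcie-flash
print(place("data", 2))        # -> disk
```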

Which means that disks are storing our less active data. For what a RAID controller costs we can buy several terabytes of capacity and store two or more copies of everything. When a disk fails, making a new copy is much faster than a rebuild.
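
Here's a minimal sketch of that replication approach, using a hypothetical in-memory model rather than a real storage API: recovering a failed disk is just re-copying its objects from surviving replicas, with no parity math and no need to read every remaining disk.

```python
# Minimal sketch of n-way replication repair (hypothetical in-memory model):
# losing a disk only requires copying its objects back from surviving
# replicas, not reading the whole array to reconstruct parity.

class MirroredStore:
    def __init__(self, disks: int, copies: int = 2):
        self.disks = [dict() for _ in range(disks)]
        self.copies = copies

    def put(self, key: str, value: bytes) -> None:
        start = hash(key) % len(self.disks)
        for i in range(self.copies):             # write every replica
            self.disks[(start + i) % len(self.disks)][key] = value

    def repair(self, failed: int) -> None:
        # Simulate losing the disk, then rebuild it by copying any object
        # whose replica set includes it from a surviving copy.
        self.disks[failed] = {}
        for d, disk in enumerate(self.disks):
            if d == failed:
                continue
            for key, value in disk.items():
                start = hash(key) % len(self.disks)
                owners = {(start + i) % len(self.disks)
                          for i in range(self.copies)}
                if failed in owners:
                    self.disks[failed][key] = value

store = MirroredStore(disks=4, copies=2)
store.put("block-0", b"data")
primary = hash("block-0") % 4
store.repair(failed=primary)              # re-copy, no parity math needed
print(store.disks[primary]["block-0"])    # -> b'data'
```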

For every pipe a queue

Intel's Sandy Bridge I/O architecture promises 40 1 GB/sec PCIe 3.0 lanes per CPU. We can give every app its own I/O queue and fast PCIe storage, if we can afford it.

Imagine a 64-core chip running 45 apps with forty 64 GB PCIe storage cards. That's mainframe/supercomputer power in a commodity box.
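
Doing the arithmetic on that configuration, using the figures above as assumptions:

```python
# Back-of-the-envelope numbers for the configuration sketched above
# (all values are the article's assumptions, not a real system spec).

lanes_per_cpu = 40         # PCIe 3.0 lanes per Sandy Bridge socket
gb_per_s_per_lane = 1.0    # ~1 GB/s per PCIe 3.0 lane
cards = 40
card_capacity_gb = 64

aggregate_bw = lanes_per_cpu * gb_per_s_per_lane
total_flash_gb = cards * card_capacity_gb

print(f"Aggregate PCIe bandwidth per socket: {aggregate_bw:.0f} GB/s")
print(f"Low-latency PCIe flash capacity:     {total_flash_gb} GB "
      f"(~{total_flash_gb / 1024:.1f} TB)")
# -> 40 GB/s of I/O bandwidth and ~2.5 TB of flash in one commodity box
```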

The Storage Bits take

Just as RAID was a smart play on '90s technology, we'll see new storage architectures taking advantage of today's options. While RAID won't disappear overnight, its days are numbered.

Storing data is cheap and getting cheaper. Moving data is costly in time and lost performance. New architectures will reduce data movement by spending capacity instead.

Latency reduction is a deeper problem. Replacing several decades of plumbing isn't simple, but it must be done. More on that later.

Comments welcome, of course. I owe a debt of gratitude to Richard Elling and Bill Moore, formerly of Sun, for their presentations at the Summit.
