The limits of RAID: Availability vs durability in archives

Availability and durability play very different roles in data preservation. Availability refers to system uptime, while durability refers to long-term data protection. RAID arrays deliver the former but fall short on the latter, which makes them a poor choice for archives. Here's what you need to know.
Written by Robin Harris, Contributor

There's a strong tendency in storage to go for the one-size-fits-all approach. Nobody wants to manage two systems when one will do. But you need to understand exactly what your use case requires - and ensure the critical success factors are well implemented.

That's the case with RAID arrays and archiving. Until about 5 years ago, you had two choices for enterprise archives: tape silos or RAID arrays. Tape silos are still a low-cost way to archive massive amounts of data, but many enterprises look at RAID arrays - especially fully depreciated ones - and think they can save money by re-purposing them.

But modern disk-based archives offer the convenience of rapid retrieval and exceptional data durability at an affordable entry price. Since preserving data is an archive's purpose, understanding the difference between availability and durability is crucial.


Storage system availability is achieved through hardware redundancy, while durability is achieved by data redundancy. Let's look at each in turn.

RAID arrays typically include two or more of each major hardware component - controllers, I/O paths, and drives - plus data redundancy that will survive one (RAID 5) or two (RAID 6) drive failures. While RAID arrays are designed to achieve six nines (99.9999 percent) availability, they often fail to do so - and that puts your data at risk.

Why RAID arrays fall short

The basic problem with RAID is that, unbeknownst to pioneer RAID designers, a key design assumption wasn't correct. That assumption - that drive failures are independent events - meant that what vendors promised and what users experienced were very different things.

It turned out that when one drive fails, another drive failure is much more likely. All the original engineering and marketing of RAID - I know, I was there - was based on the assumption that drive failures were uncorrelated, so we could apply the vendor's drive MTBF and get close-enough-to-infinite mean-times-to-data-loss. Good theory. Too bad it didn't work!
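The effect of that broken assumption can be seen with a back-of-envelope calculation. The sketch below compares the annual data-loss odds for a single RAID 5 group under the original independent-failure model against a model where a second failure is ten times likelier during a rebuild. The specific numbers (2 percent annual failure rate, one-day rebuild, 10x correlation) are illustrative assumptions, not field data.

```python
def raid5_loss_odds(n_drives=6, afr=0.02, rebuild_days=1.0, correlation=1.0):
    """Back-of-envelope annual data-loss probability for one RAID 5 group.

    All parameters are illustrative assumptions:
      afr          - annual failure rate per drive (2% here)
      rebuild_days - time to rebuild onto a spare drive
      correlation  - how much likelier a second failure is during rebuild
                     (1.0 reproduces the original 'independent' assumption)
    """
    p_first = 1 - (1 - afr) ** n_drives     # some drive fails this year
    daily = afr / 365.0                     # crude per-day failure rate
    # A second failure among the survivors during the rebuild window loses data.
    p_second = min(1.0, (n_drives - 1) * daily * rebuild_days * correlation)
    return p_first * p_second

independent = raid5_loss_odds(correlation=1.0)
correlated = raid5_loss_odds(correlation=10.0)  # correlated failures, as seen in the field
print(f"independent: 1 in {1 / independent:,.0f} groups lose data per year")
print(f"correlated:  1 in {1 / correlated:,.0f} groups lose data per year")
```

Because the first-failure term is unchanged, the correlated loss rate scales directly with the correlation factor - a 10x bump during rebuild means data loss an order of magnitude more often than the vendor math predicted.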

The modern approach

Today's durable archive systems use highly available object-based storage (object stores) to achieve much higher durability - up to 99.9999999999999 percent (fifteen nines) or more - to ensure data preservation. That means that if you stored 1,000 trillion objects, only one, on average, would not be readable.
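That claim is just arithmetic: fifteen nines of durability corresponds to a loss probability of one in 10^15 per object. A quick check, using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

p_loss = Fraction(1, 10 ** 15)   # fifteen nines -> loss probability of 1e-15 per object
objects = 1_000 * 10 ** 12       # 1,000 trillion objects
expected_lost = objects * p_loss
print(expected_lost)             # -> 1 expected unreadable object
```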

How does a durable archive achieve this remarkable feat? Through how the data is encoded and laid out on the storage.

Comparing data layouts of RAID arrays and durable archives

Parity data is what allows RAID arrays and object storage to reconstruct lost data. But RAID arrays and durable archives use parity very differently.

RAID systems lay out data and parity across a fixed bank of disks, typically on a single shelf. In a simple RAID 6 layout there might be four data disks and two parity disks, protecting against at most two simultaneous drive failures.
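The core mechanism is easy to demonstrate. RAID 6's P parity is a byte-wise XOR of the data chunks (the second, Q parity uses Reed-Solomon-style math and is omitted here). The sketch below, with illustrative four-byte "disks," shows how XOR parity reconstructs a single lost chunk:

```python
from functools import reduce

def xor_parity(chunks):
    """P parity: byte-wise XOR of all chunks, as used in RAID 5 and RAID 6."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

# Stripe a block across 4 "data disks" (illustrative contents and sizes).
data = [b"disk", b"fail", b"ures", b"corr"]
p = xor_parity(data)

# Lose one data chunk; XOR of the survivors plus parity recovers it,
# because x ^ x = 0 cancels every chunk except the missing one.
survivors = data[:1] + data[2:]
recovered = xor_parity(survivors + [p])
print(recovered)  # b'fail'
```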

Durable archive systems also use parity to protect data, but use algorithms that did not exist when RAID was proposed in the late 1980s. These modern, advanced erasure codes - often called rateless or fountain codes - offer higher redundancy and greater efficiency, perfect for economical, long-term data protection.

Unlike RAID volumes, durable archives build data and parity into the objects themselves. Objects are broken into shards and distributed across the available storage. If that storage is geographically distributed, the archive can survive the loss of an entire data center.
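The survivability follows from shard placement. With an erasure code where any k of n shards can rebuild the object, the archive survives losing a site as long as at least k shards live elsewhere. A minimal placement sketch - the 12-of-18 code and data center names are hypothetical:

```python
from itertools import cycle

def place_shards(n_shards, sites):
    """Round-robin n erasure-coded shards across the given sites."""
    placement = {site: 0 for site in sites}
    for _, site in zip(range(n_shards), cycle(sites)):
        placement[site] += 1
    return placement

def survives_site_loss(placement, n_shards, k_needed):
    """True if losing any single site still leaves >= k shards to rebuild from."""
    return all(n_shards - count >= k_needed for count in placement.values())

# Hypothetical 12-of-18 code spread across three data centers: 6 shards each,
# so losing any one site still leaves 12 shards - exactly enough to rebuild.
placement = place_shards(18, ["dc-east", "dc-west", "dc-central"])
print(placement)
print(survives_site_loss(placement, n_shards=18, k_needed=12))  # True
```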

No RAID array offers the same level of data protection. In addition, a single-namespace object store can run on a single server or span racks, data centers, or multiple geographies.

The Storage Bits take

Tape silos are no longer the automatic answer for archives, though they still have their place. Modern archive systems based upon object storage, using technologies pioneered by hyper-scale data centers, enable enterprises to archive their data at costs below public clouds, and with greater security than public clouds can offer.

Data center planners and architects need to appreciate the limitations of RAID arrays for archive use. The rate of change in storage technologies is accelerating, and IT professionals are wise to adapt to remain competitive against outsourced options.

Courteous comments welcome, of course.
