Disturbing the equilibrium
Disruption in enterprise storage isn't just possible — it's essential, if we're to keep up with projected growth. Here are three disruptions that will drive the progress of enterprise storage.
The switch to solid-state
It's already cost-effective for an increasing range of high-performance tasks to use a bank of SSDs instead of conventional rotating hard disks, even given the SSD's disadvantages of higher cost per gigabyte and shorter lifetime. But that's just the first phase: the disruptive effect of solid-state storage increases massively as we change the underlying storage architecture to make the best use of its potential.
Companies such as Fusion-io are making SSDs that look much more like fast memory tightly coupled to processors than ordinary disks. In effect, they're collapsing I/O down to the minimum number of layers running at the maximum speed. The marketing departments are pushing this as Tier 0. But it's early days, and sales have been held back because of worries over error rates and increased maintenance.
Yet flash technology improves by the year, as does that of the controllers that compensate for its drawbacks.
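The tiering idea behind all this is simple: put a small amount of very fast storage in front of a large amount of slow storage, and keep hot blocks in the fast tier. As a rough sketch, and with all names invented for illustration, a minimal write-through Tier 0 cache might look like this:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small, fast 'Tier 0' (standing in for flash)
    in front of a large, slow backing store (standing in for disk)."""

    def __init__(self, tier0_capacity):
        self.tier0 = OrderedDict()       # fast tier, kept in LRU order
        self.capacity = tier0_capacity
        self.backing = {}                # slow tier (disk)
        self.tier0_hits = 0
        self.disk_reads = 0

    def write(self, key, block):
        self.backing[key] = block        # write-through: disk always current
        self._promote(key, block)

    def read(self, key):
        if key in self.tier0:
            self.tier0.move_to_end(key)  # refresh LRU position
            self.tier0_hits += 1
            return self.tier0[key]
        self.disk_reads += 1             # miss: fall back to the slow tier
        block = self.backing[key]
        self._promote(key, block)
        return block

    def _promote(self, key, block):
        self.tier0[key] = block
        self.tier0.move_to_end(key)
        if len(self.tier0) > self.capacity:
            self.tier0.popitem(last=False)  # evict least-recently-used
```

A real controller adds wear-levelling, write-back batching and error correction, which is exactly where the early reliability worries came from, but the hot/cold separation is the heart of it.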
There are also some notable surprise runners: HP's memristor, for example, has the potential to leapfrog flash in performance and in reliability, as well as density and power utilisation.
Storage is no good if you can't get data in and out quickly enough, which is why most of the discussion around enterprise storage architecture concerns the networking fabric from which it's woven. For speed, you can't beat optical fibre. For cost, you can't beat copper.
Silicon photonics or silicon nano-optics combines the best bits of the two technologies. Demonstrated by Intel and announced by IBM, this takes all the expensive, cumbersome and inflexible aspects of optical fibre — the stuff that translates between light and electricity — and places them on-chip.
Lasers, switches, modulators, multiplexors, detectors — all of these components can be made of silicon. Once there, the magic of Moore's Law makes them cheap, simple and scalable. Both IBM and Intel have talked about terabits a second being achievable, and of course this would be very closely coupled with the other circuitry on the chip — memory, CPU, controllers.
Enterprise storage should be the first area to benefit from this, as it has the need and the margins to make economic use of the first generation of commercialised photonic products. Very cheap, very fast networking with the capability of traversing kilometres will be a game-changer — not just in terms of raw performance, but by enabling entirely new distributed topologies.
Search and security
How do you find data quickly among the petabytes, and shield it from unauthorised access? These two problems are related. At some point, the way in which file systems and search algorithms interact will change fundamentally, as the storage system itself maintains more and more information about what it contains, and who should see it.
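To see why the two problems converge, consider a store that indexes both what it holds and who may read each object: a search then only ever returns results the caller is cleared for. A minimal sketch (all names here are invented for illustration):

```python
from collections import defaultdict

class MetadataIndex:
    """Toy store-side index: tags describe what an object is,
    readers describe who may see it, and search checks both."""

    def __init__(self):
        self.tags = defaultdict(set)   # tag -> set of object ids
        self.readers = {}              # object id -> allowed principals

    def add(self, obj_id, tags, readers):
        for tag in tags:
            self.tags[tag].add(obj_id)
        self.readers[obj_id] = set(readers)

    def search(self, tag, principal):
        # Intersect the index hit list with the caller's entitlements,
        # so unauthorised objects never appear in the result set.
        return sorted(o for o in self.tags.get(tag, ())
                      if principal in self.readers[o])
```

The point of the sketch is that filtering happens inside the storage layer, not in the application after the fact.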
Projects such as ZFS and Ceph are taking on the challenge of mapping a changing, scaling storage infrastructure to the needs of computation, while Google continues its ten-year experiment in creating the world's biggest, smartest database.
Security is lagging, but a model is evolving in which each piece of data is bound to rules describing how it can be distributed, with the infrastructure itself becoming permissive or impervious according to who requests what.
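In that model the policy travels with the data, and any hop in the infrastructure can evaluate it before passing the bytes on. A deliberately simplified illustration, with the rule set and names invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SealedObject:
    """Hypothetical data-plus-policy bundle: the payload carries its own
    distribution rules rather than relying on a central gatekeeper."""
    payload: bytes
    allowed_roles: frozenset
    allow_export: bool = False  # may it leave the originating site?

def authorise(obj, role, crossing_site_boundary):
    """Any element of the fabric can run this check locally."""
    if role not in obj.allowed_roles:
        return False              # requester's role isn't on the object
    if crossing_site_boundary and not obj.allow_export:
        return False              # data is pinned to its home site
    return True
```

Real systems express this with signed policies and attribute-based access control rather than plain flags, but the shape is the same: the infrastructure becomes permissive or impervious per request.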
What the three disruptions will bring
Extremely fast local caching thanks to solid-state disks. Extremely fast long-distance networking from cheap on-chip photonics. And extremely smart data-centric fabrics that enforce strict security. With all this in place, we might just have an enterprise storage cloud that makes sense.