Virtual Storage -- is it there yet?

With corporate data increasing exponentially, hard disk sizes swelling to match and networking becoming ever more capable, the mechanics of storage have never been more important

By some estimates, for every pound spent on storage hardware seven or eight are spent on managing it thereafter: the biggest problem is not buying the boxes but what you do with them afterwards.

Virtual storage is an idea that various companies are very keen we adopt. The thinking behind it is simple: in the same way that virtual memory seamlessly merges hard disk and RAM to create a memory space with the size of one and the speed of the other, virtual storage merges many different disks (and, potentially, other devices) to make them appear as one big homogeneous area.
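The idea can be sketched in a few lines of code. This is a toy model only, with invented names and sizes: several separate "disks" (plain byte arrays here) are presented to the caller as one contiguous logical volume, just as the paragraph above describes.

```python
# Toy illustration of storage virtualisation: several backing stores
# appear as one logical byte space. Names and sizes are invented.

class VirtualVolume:
    """Concatenate several backing stores into one logical byte space."""

    def __init__(self, disks):
        self.disks = disks  # list of bytearrays standing in for physical disks

    def _locate(self, offset):
        # Translate a logical offset into (disk, local offset).
        for disk in self.disks:
            if offset < len(disk):
                return disk, offset
            offset -= len(disk)
        raise IndexError("offset beyond end of virtual volume")

    def write(self, offset, data):
        for i, byte in enumerate(data):
            disk, local = self._locate(offset + i)
            disk[local] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            disk, local = self._locate(offset + i)
            out.append(disk[local])
        return bytes(out)

# Two 4-byte "disks" appear as one 8-byte volume; a write can straddle
# the boundary between them without the caller ever knowing.
vol = VirtualVolume([bytearray(4), bytearray(4)])
vol.write(2, b"span")
print(vol.read(2, 4))  # b'span'
```

The point of the sketch is the `_locate` step: applications deal only in logical offsets, so disks can be added or swapped behind the mapping without them noticing.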

The benefits on offer are clear. Management is much simplified: scaling and other gross changes to the physical storage configuration can be made without applications being aware, tuning and optimisation can be carried out in one place, and there are -- of course -- promised cost advantages.

There are three main virtual storage architectures -- in-band, out-of-band, and host-based. In-band has the virtual storage management box sitting between the applications and the storage devices -- it processes all transactions, acting much like a network router. Out-of-band doesn't mediate every transaction, but sends information about where the transactions are to go to the applications: the analogy here is a DNS server. Host-based systems have virtualisation software running on each storage host.
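The router and DNS analogies can be made concrete with a schematic sketch, using plain Python objects in place of real storage gear. All class and method names here are invented for illustration; the contrast to note is where the data itself flows.

```python
# Schematic contrast of in-band versus out-of-band virtualisation.
# All names are invented; the data structures stand in for real hardware.

class InBandVirtualizer:
    """Sits in the data path: every transaction passes through it,
    much like a network router."""

    def __init__(self, disks, mapping):
        self.disks = disks      # disk name -> {physical block: data}
        self.mapping = mapping  # logical block -> (disk name, physical block)

    def read(self, logical_block):
        disk, phys = self.mapping[logical_block]
        return self.disks[disk][phys]  # data flows through this box


class OutOfBandMapper:
    """Sits beside the data path: answers 'where is it?' and lets the
    host talk to the disk directly, much like a DNS server."""

    def __init__(self, mapping):
        self.mapping = mapping

    def resolve(self, logical_block):
        return self.mapping[logical_block]  # host does the actual I/O


disks = {"diskA": {0: b"alpha"}, "diskB": {0: b"beta"}}
mapping = {0: ("diskA", 0), 1: ("diskB", 0)}

inband = InBandVirtualizer(disks, mapping)
print(inband.read(1))        # b'beta' -- fetched via the middle box

mapper = OutOfBandMapper(mapping)
disk, phys = mapper.resolve(1)
print(disks[disk][phys])     # b'beta' -- host fetched it directly
```

In the in-band case every byte crosses the virtualiser; in the out-of-band case only the small lookup does, which is exactly why the trade-offs discussed next fall out the way they do.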

Each architecture has trade-offs. In-band storage moves everything through one point, and is thus in danger of becoming a bottleneck -- there goes the scalability -- and it adds latency to every transaction. Out-of-band systems don't impede the flow of data between application and disk, but you can't mix and match components from different companies. Host-based systems are also free from bottlenecks, but as each storage server needs additional software there is considerable extra management overhead.

What all architectures have in common is that nobody's buying them. Ask vendors why, and they say things like "the storage-virtualization concept is gaining mindshare": ask users why and they quote unproven benefits, potential pitfalls and a lack of standards across the industry that makes the thousand dialects of the Amazonian tribes seem like a Janet and John book. Lack of interoperability is a guaranteed kiss of death to an idea whose benefits are not incontestable. There are open standards under development, such as the Internet Backbone Protocol, but these are still very young and not yet usable.

The industry has belatedly realised this. Following a number of initiatives, such as the Hitachi-led TrueNorth consortium, it turns out that nearly everyone has been working behind the scenes on Bluefin. In gestation for 18 months and recently announced, Bluefin takes existing standards -- CIM, the Common Information Model, which describes the management requirements and capabilities of systems, and Web-Based Enterprise Management (WBEM), the set of protocols used to exchange that information -- and specifies how they should be used.

This includes ways to discover CIM managers on networks, how to take control of subsystems co-operatively with other management systems, and so on. By using the ubiquitous mixture of XML and SOAP, Bluefin aims to make storage management as open a field as other areas of IT have become through SNMP and its progeny: to that end, EMC, IBM, Hitachi, Brocade, Veritas, Hewlett-Packard, Sun, QLogic, Dell, Emulex and StorageTek, among others, have joined in. The one name missing is Microsoft, but that company is still not taken seriously in the world of multi-terabyte databases.
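To give a flavour of what "management as XML over the wire" looks like, here is a minimal sketch built with Python's standard library. The element names are simplified stand-ins, not the exact CIM-XML schema, and the class name queried is invented for the example.

```python
# Sketch of an XML management request in the CIM/WBEM style: ask a
# manager to enumerate instances of a class. Element and class names
# here are illustrative stand-ins, not the real CIM-XML schema.

import xml.etree.ElementTree as ET

def build_enumerate_request(class_name):
    """Build an XML message asking a CIM-style manager to list all
    instances of a given class (e.g. a disk-array class)."""
    msg = ET.Element("MESSAGE", ID="1")
    call = ET.SubElement(msg, "METHODCALL", NAME="EnumerateInstances")
    ET.SubElement(call, "CLASSNAME", NAME=class_name)
    return ET.tostring(msg, encoding="unicode")

req = build_enumerate_request("ExampleDiskArray")
print(req)
```

Because the payload is plain, self-describing XML carried over standard web plumbing, any vendor's tool can in principle parse any other vendor's responses -- which is the interoperability point Bluefin is chasing.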

The Bluefin specification itself has not yet been made public, but will be later this summer. Sun and HP have said they expect to ship Bluefin products this year: Sun bullishly promises something by the third quarter, while HP, less stridently, says it expects some support by the end of the year but prefers to wait and see what the final specification actually says.

Bluefin is now being run by the Storage Networking Industry Association (SNIA), which may either finish the standardisation process itself or hand it on to the Distributed Management Task Force (DMTF) -- the body that manages CIM.

If you do want to consider storage virtualisation, now is a very good time to wait and see. If you really must, ask hard questions first. Will the system merge with your existing management tools? How scalable is it? How does it affect backup and restore strategies -- how long does it take to take an image and rebuild it? Is it fault-tolerant? Can you add storage from other manufacturers? Has anyone else installed such a system to manage comparable amounts of data, and what's their telephone number? Only if you get good answers in all of the above areas -- any of which can ruin your weekend -- should you start to get onto costs and benefits.

But chances are that by this time next year the world of storage management will look very different. If Bluefin works, virtual storage will finally have made it into the twenty-first century, where systems are open and benefits plain.

