One of the eternal problems of systems management is that there's so much of it. As system components get more complex and capable, deploying and controlling them gets no simpler. Storage is particularly prone to this: you can never fit and forget a storage device. It will fill up, need backing up and have to be kept in reliable condition. The problem gets more complicated with every extra piece of storage attached to your system, and live data is particularly unforgiving.

Storage virtualisation aims to fix this. It works by gathering together disparate storage devices and presenting them to a server as a single logical entity: one point of management, plus greater ease of attaching and detaching extra storage. With decent caching and bandwidth management, usually courtesy of a storage-area network (SAN) running on Fibre Channel, performance and reliability are also enhanced -- or so claim the vendors. This is nothing new -- RAID systems are simple virtual storage systems -- but while the idea has many advantages, the costs and disadvantages have led many companies to adopt a wait-and-see policy.

There are three main virtual storage architectures: in-band, out-of-band and host-based. In-band puts the virtual storage management box between the applications and the storage devices -- it processes every transaction, acting much like a network router. Out-of-band doesn't mediate every transaction, but instead tells the applications where their transactions should go: the analogy here is a DNS server. Host-based systems run virtualisation software on each storage host.

Each architecture has trade-offs. In-band moves everything through one point, and is thus in danger of becoming a bottleneck -- there goes the scalability. It also increases latency. Out-of-band systems don't impede the flow of data between application and disk, but you can't mix and match options from different companies.
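The router-versus-DNS distinction can be sketched in miniature. This toy Python model (every class and method name here is invented for illustration, not taken from any real product) shows the difference in data path: the in-band box carries every read and write itself, while the out-of-band box only answers "where does this block live?" and leaves the host to talk to the disk directly.

```python
class BackendDisk:
    """One physical storage device, addressed by block number."""
    def __init__(self, name, blocks):
        self.name = name
        self.store = [None] * blocks

def locate(disks, logical_block):
    """Concatenate the disks into one logical address space."""
    for disk in disks:
        if logical_block < len(disk.store):
            return disk, logical_block
        logical_block -= len(disk.store)
    raise IndexError("logical block out of range")

class InBandVirtualiser:
    """Sits in the data path, like a router: every read and write
    passes through it, so it can remap blocks transparently -- but
    it is also a potential bottleneck and adds latency."""
    def __init__(self, disks):
        self.disks = disks
    def write(self, logical_block, data):
        disk, offset = locate(self.disks, logical_block)
        disk.store[offset] = data          # data flows through us
    def read(self, logical_block):
        disk, offset = locate(self.disks, logical_block)
        return disk.store[offset]

class OutOfBandVirtualiser:
    """Sits outside the data path, like a DNS server: it only
    resolves logical addresses; the host then goes to the disk
    directly -- no bottleneck, but every host needs this logic."""
    def __init__(self, disks):
        self.disks = disks
    def resolve(self, logical_block):
        return locate(self.disks, logical_block)

disks = [BackendDisk("array-a", 100), BackendDisk("array-b", 50)]

inband = InBandVirtualiser(disks)
inband.write(120, "payload")             # lands on array-b, block 20

oob = OutOfBandVirtualiser(disks)
disk, offset = oob.resolve(120)          # host asks where, then goes direct
disk.store[offset] = "payload"
assert inband.read(120) == "payload"
```

The asymmetry the article describes falls straight out of the shapes of the two classes: only the in-band version has `read` and `write` methods at all, which is why it can become saturated, and why the out-of-band version pushes work (and software) out to every host.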
Host-based systems are also free from bottlenecks, but as each storage server needs additional software there is considerable extra management required. This negates the main benefit of storage virtualisation: simplified management -- in the ideal case, one management console covering all storage across the enterprise.

In-band is proving the most popular, usually with the virtualisation managed by an Intel-based server running Windows or Linux. This must be powerful -- there's no point in such a system if it serves another server with higher I/O throughput -- and vendors and analysts alike warn that reliability has to be explicitly addressed. Out-of-band virtualisation avoids this problem, but it requires software to be installed and managed on every server on the system. Such approaches tend to be proprietary, making them hard to integrate across multiple vendors' equipment and difficult to manage.

This lack of standardisation has long hindered storage virtualisation. That's changing. Following a number of initiatives, the Storage Networking Industry Association adopted a standard called Bluefin -- now officially known as the Storage Management Interface Specification, SMIS, as well as sometimes just SMI or SMI-S. It's an attempt to make storage devices and storage area network management consoles work together. SMIS takes existing standards such as CIM -- the Common Information Model, which describes the management requirements and capabilities of systems -- and Web-Based Enterprise Management, WBEM, and specifies how they should be used. This includes ways to discover CIM managers on networks, how to take control of subsystems co-operatively with other management systems, and so on.
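To make the CIM and WBEM layers concrete, here is a sketch of the sort of CIM-XML request a WBEM client POSTs over HTTP to enumerate a management class -- the mechanism SMIS builds on. The host name, URL path and port are illustrative assumptions (a real SMIS agent advertises its own), and nothing is actually sent; the point is the shape of the message.

```python
# CIM-XML payload asking a CIM object manager to enumerate instances
# of a class. CIM_StorageVolume is a standard CIM schema class; the
# namespace root/cimv2 is a common default, assumed here.
CIM_PAYLOAD = """<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
 <MESSAGE ID="1001" PROTOCOLVERSION="1.0">
  <SIMPLEREQ>
   <IMETHODCALL NAME="EnumerateInstances">
    <LOCALNAMESPACEPATH>
     <NAMESPACE NAME="root"/>
     <NAMESPACE NAME="cimv2"/>
    </LOCALNAMESPACEPATH>
    <IPARAMVALUE NAME="ClassName">
     <CLASSNAME NAME="CIM_StorageVolume"/>
    </IPARAMVALUE>
   </IMETHODCALL>
  </SIMPLEREQ>
 </MESSAGE>
</CIM>"""

# Extra HTTP headers that mark the POST as a CIM operation.
CIM_HEADERS = {
    "Content-Type": 'application/xml; charset="utf-8"',
    "CIMOperation": "MethodCall",
    "CIMMethod": "EnumerateInstances",
    "CIMObject": "root/cimv2",
}

def build_request(host="cimom.example.net", port=5988, path="/cimom"):
    """Assemble the raw HTTP request text (nothing is transmitted)."""
    lines = ["POST %s HTTP/1.1" % path, "Host: %s:%d" % (host, port)]
    lines += ["%s: %s" % (k, v) for k, v in CIM_HEADERS.items()]
    lines.append("Content-Length: %d" % len(CIM_PAYLOAD.encode()))
    return "\r\n".join(lines) + "\r\n\r\n" + CIM_PAYLOAD

print(build_request().splitlines()[0])   # POST /cimom HTTP/1.1
```

The discovery and co-operative-control pieces the article mentions sit on top of exchanges like this one: a management console that can speak this dialect can, in principle, enumerate and drive any vendor's SMIS-compliant array.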
By using the ubiquitous mixture of XML and SOAP, SMIS aims to make storage management as open a field as other areas of IT have become through SNMP and its progeny: to that end EMC, Brocade, IBM, Hitachi, Veritas, Hewlett-Packard, Sun, Dell, Intel and StorageTek, among others, have joined in. Microsoft is an associate member of SNIA, but at the time of writing was unable to say whether Server 2003 and its associated storage aspects -- Windows Powered NAS (WPN), Virtual Disk Service (VDS) and Volume Shadow Copy Service (VSS) -- would be SMIS compliant. We suspect they will be, as Microsoft has formed strategic alliances with people such as HP, but the company's history on open standards is mixed.

While storage virtualisation has improved over the past year, there are still many questions to ask of any product. Will the system merge with your existing management tools? How scalable is it? How does it affect backup and restore strategies -- how long does it take to take an image and rebuild it? Is it fault-tolerant? Can you add storage from other manufacturers? Has anyone else installed such a system to manage comparable amounts of data, and what's their telephone number? Only if you get good answers in all of these areas -- any of which can ruin your weekend -- should you move on to costs and benefits.

Confidence in storage virtualisation will only come with a history of successful deployment in heterogeneous environments and plenty of stories of tangible gains. The technology is still more promise than delivery, but with continual and substantial development taking place it has every chance of being ready for action when the current stringencies fade out.