
Making servers virtually better

Advances in virtualisation technology mean servers no longer need to be tied to a particular piece of hardware - they can be backed up or moved from one machine to another in the same way a file is copied from disk to disk.
Written by Rupert Goodwins, Contributor

One of the side effects of recent increases in processor and peripheral power is that many of our systems are running at well under their full potential. Although modern operating systems are inherently multitasking, and can run many applications at once, there is always some interaction between them as the OS allocates resources and switches its attention between them. Also, events in one application that require memory, CPU time or file system resources can affect others on the same server: in extreme cases, a faulty or overloaded application can degrade or disable others.

The ideal system would be one server per application, affording maximum control and minimal interaction. It would also let managers run multiple operating systems, tuning each one for a particular task such as development, deployment or control. Unfortunately, that's just too expensive for most environments. A more plausible solution, however, is server virtualisation.

Server virtualisation is a convenient and fashionable term for a number of ideas, not all of which mean the same thing. For example, Microsoft's IIS can be virtualised so that it runs multiple Web sites as if they were on different computers, when in fact they are sharing the same operating system. Other approaches to virtualisation, already common on more powerful non-PC hardware, operate at a much lower level: so low that even the operating system is unaware that virtualisation is going on.
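
To make the distinction concrete, here is a minimal Python sketch of the application-level approach (not IIS itself, which is configured rather than programmed): one process answers for several apparent sites by inspecting the HTTP Host header. The hostnames and port are invented for illustration.

```python
# Minimal sketch of application-level virtual hosting: one process,
# one OS, several apparent "sites" chosen by the HTTP Host header.
# Hostnames and port are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {
    "www.alpha.example": b"Welcome to Alpha\n",
    "www.beta.example": b"Welcome to Beta\n",
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Strip any :port suffix, then pick the site for this hostname.
        host = self.headers.get("Host", "").split(":")[0]
        body = SITES.get(host, b"Unknown site\n")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), VirtualHostHandler).serve_forever()
```

Point two hostnames at the same machine and each looks like its own server to visitors; but a crash in this single process takes every site down with it, which is precisely the interaction the lower-level approach eliminates.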

This kind of server virtualisation is the one most in fashion, and the one that gets most of the attention. Instead of an operating system controlling all applications, a layer of software bound even more tightly to the hardware than the OS switches the processor, memory and peripherals between multiple independent operating systems, each of which can act as an independent server and run its own set of services as if on a separate machine.
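
The switching itself can be sketched in miniature. The toy Python scheduler below stands in for that low-level layer: it round-robins a single simulated CPU between guest contexts, each of which behaves as if it ran alone. A real hypervisor preempts guests with hardware timer interrupts and privileged instructions rather than cooperative yields, and the guest names here are invented.

```python
# Toy illustration of a hypervisor time-slicing one CPU between guests.
# Each "guest" is a generator that yields when its slice expires; a real
# hypervisor preempts guests via timer interrupts instead.
from collections import deque

def guest(name, work_units):
    """A pretend operating system: runs for a slice, then yields control."""
    for step in range(work_units):
        print(f"{name}: executing step {step}, unaware of other guests")
        yield  # slice over; the 'hypervisor' regains the CPU

def hypervisor(guests):
    """Round-robin the CPU between runnable guests until all have halted."""
    run_queue = deque(guests)
    while run_queue:
        current = run_queue.popleft()
        try:
            next(current)              # give the guest one time slice
            run_queue.append(current)  # still runnable: back of the queue
        except StopIteration:
            pass                       # guest halted; drop it

hypervisor([guest("linux-vm", 3), guest("windows-vm", 2)])
```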

Despite some marketing spiel, this is nothing new. IBM mainframes have been configurable as multiple virtual machines for decades, but the ability of the PC architecture – both hardware and software – to do the same has been limited by the original design decision that one computer ran one task for one user. Over the standard's 20 years of continuous development, though, this limitation has been steadily eroded.

Now, virtual server products of considerable sophistication, such as VMware's ESX Server and Microsoft's Virtual Server (purchased from Connectix), are getting close to the ideal, where a server's physical resources can be set up to create a number of virtual servers that appear to all intents and purposes to be running on different computers. They can have their own IP addresses, see disk and memory resources as theirs alone, run their own operating systems and act autonomously. One piece of hardware can run a mix of Linux, Windows, Unix or whatever, each instance operating as a full computer in its own right.
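
In practice, each virtual server boils down to a description that the virtualisation layer realises on whatever hardware is available. A hypothetical definition might look like the Python sketch below; the field names and values are invented for illustration, not any vendor's actual format.

```python
# Hypothetical virtual server definitions: each guest gets its own
# identity and a carved-out slice of the host's physical resources.
# Field names and values are illustrative, not any vendor's format.
from dataclasses import dataclass

@dataclass
class VirtualServer:
    name: str
    guest_os: str    # the OS this guest boots, independent of its peers
    ip_address: str  # the guest's own address on the network
    memory_mb: int   # memory the guest sees as exclusively its own
    disk_image: str  # a host file standing in for a whole disk

fleet = [
    VirtualServer("web01", "Linux", "192.0.2.10", 512, "/vm/web01.img"),
    VirtualServer("mail01", "Windows", "192.0.2.11", 1024, "/vm/mail01.img"),
]

# One physical box must be able to satisfy the sum of its guests' claims.
assert sum(vm.memory_mb for vm in fleet) <= 4096, "host memory exceeded"
```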

The primary advantages of this approach are scalability, reliability and efficiency – which boil down to saving money without sacrificing capability. Because a server is no longer tied to a particular piece of hardware, it can be backed up or moved from one machine to another pretty much as a file is copied from disk to disk. Adding extra processors to an SMP machine, or increasing its memory or disk space, upgrades all the virtual servers running on it at once, and multiple servers can easily be coalesced onto one machine.
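
Because a guest's entire disk is typically just a large file on the host, backup really can be ordinary file copying. The sketch below assumes a powered-off guest whose image sits at a hypothetical path; copying the image of a running guest this way could capture an inconsistent, half-written state.

```python
# Back up a (powered-off) virtual server by copying its disk image.
# Paths are hypothetical; a live guest should be suspended or
# snapshotted first, or the copy may be internally inconsistent.
import shutil
from datetime import date

SOURCE = "/vm/web01.img"
BACKUP = f"/backup/web01-{date.today().isoformat()}.img"

shutil.copy2(SOURCE, BACKUP)  # copy2 preserves timestamps as well
print(f"Backed up {SOURCE} to {BACKUP}")
```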

Management also changes dramatically. Control of many servers can be brought into one place, as you can make changes to the virtual hardware the software sees without having to go near the physical side. In fact virtual servers can be brought into existence or turned off without any hardware changes whatsoever – you can copy a server, make changes to it and test it out without affecting the live services at all, only switching over when you're satisfied that things are correct. That's a traditional and sensible approach to management, but now it can be done without having to find and configure extra hardware. ESX Server even promises to let servers move between hardware platforms seamlessly without halting, which has huge implications for reliability, disaster management or coping with changing service demands.
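
That clone-and-test workflow reduces to file operations plus a switchover. One common pattern, sketched below with hypothetical paths, keeps a symlink as the pointer to the live image and repoints it only once the staged copy has passed its tests.

```python
# Sketch of clone -> test -> switch over, using a symlink as the
# pointer to the live image. Paths and the test are hypothetical;
# real products wrap this in snapshot and registration machinery.
import os
import shutil

LIVE_LINK = "/vm/web01-live.img"  # symlink the hypervisor boots from
STAGED = "/vm/web01-staged.img"

shutil.copy2(os.path.realpath(LIVE_LINK), STAGED)  # clone the live image

# ... boot STAGED on a test network, apply changes, run checks ...
tests_passed = True  # stand-in for a real acceptance test

if tests_passed:
    # Repoint the symlink in one step so the next boot uses the new image.
    tmp = LIVE_LINK + ".new"
    os.symlink(STAGED, tmp)
    os.replace(tmp, LIVE_LINK)  # rename over the old link atomically
```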

There are disadvantages – in the end, you can't get something for nothing, and a physical server that's already operating at full tilt won't magically gain the ability to run extra tasks just by sprouting virtualisation. Virtualisation also places much more rigorous demands on hardware and the way it interoperates with operating systems – there are obvious conflicts waiting to happen if two servers make simultaneous or conflicting requests to the same device. Often this can be resolved by the virtualisation machinery, but at a speed penalty; the better the hardware is at coping with these situations without layers of software management interceding, the more effective the virtualisation will be. Also, although it is possible to build very reliable, cost-effective systems with virtual server technology, the reverse is also a danger – if a catastrophic failure hits one piece of hardware pretending to be ten servers, that's ten servers out for the count.
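
The cost of that software mediation is easy to picture. The Python sketch below models an emulated device as a lock-protected buffer: two "guests" writing concurrently stay correct but must take turns, and that queueing is the speed penalty. Hardware that can accept both streams safely on its own removes the wait. Names and timings are illustrative only.

```python
# Toy model of the virtualisation layer serialising two guests'
# writes to one emulated device. The lock keeps the accesses safe
# but forces them to queue: that queueing is the speed penalty.
import threading
import time

class EmulatedDevice:
    def __init__(self):
        self._lock = threading.Lock()
        self.log = []

    def write(self, guest, data):
        with self._lock:      # the virtualisation layer mediates every access
            time.sleep(0.01)  # stand-in for software emulation overhead
            self.log.append((guest, data))

device = EmulatedDevice()

def guest_io(name):
    for i in range(5):
        device.write(name, f"block {i}")

threads = [threading.Thread(target=guest_io, args=(n,))
           for n in ("guest-a", "guest-b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(device.log)} writes completed, all intact but serialised")
```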

Future moves towards virtualisation include Intel's new Vanderpool technology. Details are sketchy, but it includes more explicit hardware support for virtualisation, as well as low-level software components to help OS designers to plan for efficient operation in a virtual environment. Likewise, both Intel and its competitors are fleshing out plans for multicore processors that present multiple independent CPUs in a package ideal for sharing in a virtual system.

As with storage and networking, virtualisation decouples hardware from software, and software from data, so that the only thing that matters is what you want to do, not where you want to do it. We are moving at some speed to a computing world where the limitations of local resources are removed and interconnection, rather than physical compartmentalisation, defines IT's potential.

