Virtualization is perhaps the hottest topic in IT at the moment. The technology was already making strides with the promise to cut up to three-quarters of server-room hardware and energy costs, before Microsoft weighed in and the hype shot off the scale.
The technology is changing the way servers are sold and used. It takes the mass of processors, memory, storage and networks in the average server room and masks their complexity, so applications run on virtual machines. Each app appears to have a machine to itself, while the underlying resources are shared between the virtual machines.
This approach is a lot easier and more efficient than running separate machines. If you want a large processor, virtualization can pull together resources on multiple servers. If you want lots of small ones--say, for remote desktops--you can run lots of virtual machines on one server. If you want to run multiple operating systems, each one gets its own virtual machine.
That concept is revolutionary, but old. You could argue that it goes back to Alan Turing's concept of a universal machine. By the 1970s, virtualization was well established on mainframes and other big servers.
IBM's VM operating system--the VM stood for virtual machine--ran a control program or hypervisor, which gave each application a copy of the actual machine, with its own address space. Virtual machines shared the expensive resources on the mainframe efficiently.
Then in the 1990s, a revolution happened. Smaller servers based on processors using Intel's x86 instruction set popped up everywhere. They were fast and cheap enough to deploy for almost every application, but that made for inefficient use of memory, storage and power--most ran at less than 21 percent utilization, according to research into European virtualization by analyst firm IDC. They were also a management nightmare.
Today's virtualization movement started in 1999, when start-up company VMware brought the technology to the x86 architecture. IT managers wanted to consolidate their servers, and VMware's ESX hypervisor promised to be a tool that could do just that. Virtualization vendors claim consolidation ratios of 4:1--potentially cutting the hardware in a server room by up to 75 percent. In IDC's survey, most virtualization users claimed to have achieved that ratio.
VMware has ridden the virtualization wave: 35 percent of servers bought in 2007 were virtualized, and 52 percent of those bought in 2009 will be, according to IDC. The same survey found 82 percent of those servers are virtualized with VMware.
This sort of success didn't go unnoticed. Storage giant EMC bought VMware for $625m (£348m) in 2004 and three years later floated 10 percent of the virtualization company in a partial IPO, picking up $1 billion.
Competition appeared. VMware's biggest rival came from the open-source world. A project based at the University of Cambridge developed the Xen hypervisor as free software and spun off a commercial company, XenSource, to manage the code and develop commercial products based on it.
Companies including Sun, Red Hat and Oracle have produced commercial products based on Xen. Sun's xVM Server launched this month. Citrix bought XenSource in 2007 and adopted its XenServer products. How many Xen-based hypervisors are out there? No one knows, because they are free or bundled with other systems. The IDC survey found only three percent of European virtualization was being performed with Xen, but, given the hidden nature of hypervisors, it is likely that the real figure is higher.
While this was happening, chipmakers joined in. Early x86 hypervisors were inefficient, because the hardware wasn't designed to support virtualization. They had to work around it in one of two ways: by translating privileged instructions on the fly, a technique known as binary translation, or by modifying the guest operating system to call the hypervisor directly--this is called 'paravirtualization'.
Binary translation cost performance, while paravirtualization meant operating systems had to be modified to run as guests, and features such as power management were not completely available. Either way, these virtualization schemes delivered less efficiency than users might have hoped.
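Early x86 hypervisors handled privileged guest instructions in one of two ways: binary translation rewrote them on the fly so an unmodified guest could run, while paravirtualization required the guest OS to be ported to call the hypervisor directly. A toy Python sketch of that trade-off--every class and method name here is illustrative, not taken from any real hypervisor:

```python
# Toy model contrasting the two early x86 virtualization approaches.
# Names are illustrative only; no real hypervisor API is shown.

class ToyHypervisor:
    def translate(self, instruction):
        # Binary translation: intercept and rewrite a privileged guest
        # instruction at runtime. The guest OS runs unmodified, but each
        # rewrite costs CPU cycles.
        return f"safe({instruction})"

    def hypercall(self, request):
        # Paravirtualization: the guest OS has been modified to ask the
        # hypervisor directly. Faster, but the guest must be ported first.
        return f"handled({request})"

hv = ToyHypervisor()
print(hv.translate("mov cr3, eax"))    # unmodified-guest path
print(hv.hypercall("set_page_table"))  # paravirtualized-guest path
```

The hardware extensions discussed below removed much of this overhead by letting the processor itself trap privileged guest instructions.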
Intel and AMD extended the instruction sets of their newer processors to give ever greater support for virtualization--AMD's effort is called AMD-V, while Intel's technology is called VT. The Intel Xeon 7400 Dunnington processors, launched this week, include FlexMigration, which allows virtual machines to be moved around easily in a server pool.
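On Linux, the kernel reports these processor extensions in /proc/cpuinfo: the flag `vmx` indicates Intel VT, and `svm` indicates AMD-V. A minimal sketch of the check--the function name and sample flags line are hypothetical:

```python
def hw_virt_support(cpuinfo_text):
    """Report which hardware virtualization extension the CPU flags advertise.

    'vmx' is Intel's VT; 'svm' is AMD-V. Takes the text of /proc/cpuinfo.
    """
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT"
    if "svm" in flags:
        return "AMD-V"
    return "none"

# Hypothetical excerpt of a /proc/cpuinfo flags line:
sample = "flags\t\t: fpu vme msr vmx sse2"
print(hw_virt_support(sample))  # -> Intel VT
```

On a real machine you would pass `open("/proc/cpuinfo").read()` instead of the sample string.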
With all server processors supporting virtualization, it's clearly going to be a standard part of operating systems. IDC thinks a small majority of next year's new servers will be virtualized from the day they are installed, so users will want virtualization built into the operating system.
Red Hat has predicted that the basic hypervisor function will be free. That viewpoint is partly driven by the company's choice of hypervisor: Red Hat is adopting the open-source KVM hypervisor, created by Qumranet--recently bought by Red Hat--to embed virtualization in the Linux kernel. While it gets that going, the company is still working with Xen, as it has for the past seven years.
But the biggest news this year has been from Microsoft. Never that positive on virtualization, which usually means fewer operating system licenses, Microsoft had dabbled with a free hypervisor, Virtual Server, for its 32-bit operating systems. Now the company has bitten the bullet, and launched Hyper-V, essentially for free with Windows Server 2008.
Analyst firm Ovum has worked out the real cost of a managed Hyper-V installation at $21,000 for five hosts, compared with a VMware price of $61,000. "Microsoft's pricing underlines that its assault on the server-virtualization market will begin at the low end, initially presenting VMware with much less competition at the high end," said Timothy Stammers, senior analyst at Ovum.
At the moment, Microsoft's hypervisor has a very limited guest list. It welcomes only Windows and Suse Linux from Microsoft partner Novell, compared with the vast lists supported by the established players, Xen and VMware. But Hyper-V should be an easy option for Microsoft-only IT departments, giving them a Microsoft-approved way to dabble in Linux.
From now on, as befits a technology that masks the underlying complexities of hardware, the details of virtualization technology will become unimportant, compared with the overall price, the management tools, and the way it matches users' specific needs.
The most significant example is cloud computing. Cloud service providers need to switch on servers for clients quickly and adjust their resources instantly. Both Citrix and VMware have launched versions of their technology tuned for cloud providers this week. Other areas where virtualization schemes will compete include storage, high availability and disaster recovery.
Virtualization holds the promise of bringing order and efficiency to today's servers. But the struggle is still on to become the main supplier to deliver that boon.