In a blog entry that I penned in late January, I wondered what Intel's forthcoming Vanderpool -- a hardware-based virtualization technology that will find its way into Intel's chips -- meant for virtualization solution provider VMware. Since writing that, the folks at VMware have been wanting to respond. While at LinuxWorld, I had a chance to catch up with Raghu Raghuram, VMware's Sr. Director of Strategy and Marketing. According to Raghuram, while VMware is happy to sell single-system virtualization technologies, it has moved well beyond a single-system focus -- which means that VMware has solutions that go beyond single-system offerings such as Vanderpool. But Vanderpool isn't the only virtualization technology that could change VMware's future. Support for Xen (a competing open source virtualization technology from XenSource) from companies like Novell, Red Hat, and AMD is piling up. Here's what he had to say:
What's the latest update on VMware?
Our underlying strategy has broadened significantly from virtualizing a single machine. In late 2003, we introduced a suite called Virtual Infrastructure. It uses a hypervisor layer of software on each system; we can then pool a collection of systems together, each running ESX Server, with the entire thing managed by our VirtualCenter tools, which let you create and manage virtual machines across your entire server farm. The killer technology, though, is VMotion. The way it works is that while a virtual machine is running, you can move it from one physical server to another with zero downtime and no disruption. Also, one of the key benefits of VMware is that it takes hardware dependencies out of the equation, so this movement can happen even when the system configurations are different. Without our software, doing this would require completely reconfiguring the operating system to work with the new hardware.
Who needs something like that?
There are four reasons why you'd do this. First, any sort of planned hardware maintenance, such as changing a board. Before a technology like ours, you had to bring down the users and the system; now, you can move the virtual machine on the fly to another box. The second reason is resource allocation. Based on the application it's running, a single virtual machine could suddenly need more resources than are available, depending on what has been allocated to the other virtual machines on that box. With VMotion, you can move that virtual machine to another system that has the spare resources to support the application. The third reason is a scheduled version of the second: to account for end-of-the-month activity or other planned peaks, you can schedule virtual machines to move automatically. The last is done in conjunction with server vendors, who monitor things like fan speeds and temperature sensors on the box in hopes of anticipating a failure. If those algorithms sense a failure coming, their management software (e.g., IBM Director, Insight Manager, or OpenManage) notifies VirtualCenter through a Web services interface, and VirtualCenter dynamically moves the partition.
And, compared to Vanderpool?
Vanderpool -- at least the first generation of it -- virtualizes the chip's instruction set. But it doesn't do some of the things that VMware does, such as virtualizing memory or the I/O subsystem. Also, there's no management component like VirtualCenter, nor can the virtualization work across systems the way ours does. That said, we are collaborating with AMD and Intel, so by using their virtualization technology in combination with ours, we'll be able to decrease the time it takes to virtualize systems.
Your products are priced by CPU. Will VMware change models when multi-core processors come out?
No. Pricing will be by the core, so a dual-core or quad-core CPU will be treated as two or four CPUs, respectively.
What about XenSource's virtualization technology? It has a lot of buzz and, apparently, a lot of support from the vendor community as well (including some of your partners).
Our solution is cross-platform, and Xen is only for Linux. We do a lot of business on Windows and a lot on Linux. Also, Solaris is supported on an experimental basis in our GSX Server and Workstation products; experimental status is how we introduce support for new operating systems, and over time we may fully support it. As for BSD, we have supported that right from the get-go. We work with all major versions of Linux, and our ESX Server is certified to run Red Hat and SuSE. To run Linux on Xen, you must make modifications to the kernel. That may make Linux able to run on Xen, but not necessarily on all hardware, and vice versa. Take regular Red Hat Enterprise Linux 3: you cannot run it on Xen.
But surely, with all the support Xen looks like it will be getting from major industry players, those problems will get worked out and Xen will be way more viable than it is right now.
It still isn't cross-platform. We believe in an operating system-independent model, in which the virtual machine layer stands separate from the OS. You can run legacy versions of Windows, old versions of Linux... it doesn't matter. No datacenter will be on the latest and greatest version of a particular operating system.
But for shops running Linux, Xen's open source nature gives it a cost advantage as well, doesn't it?
I don't know what the cost will be, but there will be some cost. Xen, by virtue of the fact that it's open source technology, is good for people who are attracted to open source. The value of virtualization is severely curtailed, however, once you realize that it's purely for Linux.
You keep emphasizing the multiplatform support. Linux is faring pretty well against Windows in the datacenter. Is it safe to say that your strategy depends on ongoing OS heterogeneity in the datacenter?
Look five to 10 years out, to the point where Xen becomes a stable platform. We have a rich set of virtual services now, and Xen is just starting out -- it's where we were in 1998. VMware is moving on to other things while Xen is still working on what VMware worked out a long time ago. The bottom line, though, is that customer success is what matters, and we don't think Xen is a viable alternative right now.