
Hyper-V

  • Editors' rating: 7.0 (Very good)

Pros

  • Easy to install
  • Compatible with a huge range of servers and I/O adapter cards
  • A simple option for organisations wanting to test server virtualisation

Cons

  • VMs could be affected by software problems with the host Windows server
  • All VMs must be temporarily halted if the host server needs a reboot after patching
  • Need to buy extra software to manage Hyper-V server farms
  • Requires a 64-bit CPU with Intel VT or AMD-V support

Microsoft released the first production version of its Hyper-V hypervisor on 26 June. Supplied as part of the Windows Server 2008 bundle, Hyper-V enables a single piece of server hardware to run multiple operating systems. Most people will use it to run several versions of Windows on the same server, but it could also be used to host Linux software, or just about anything else that runs on an x86-compatible CPU. However, organisations wanting to convert existing servers to run as virtual machines (VMs) or import VMs from other vendors' products will need to buy additional software.

We tested Hyper-V running on a workstation fitted with an Intel Core 2 Quad Q6600 CPU running at 2.4GHz and 3GB of RAM. We found the software easy to set up and use, and it did a good job of hosting both Windows and Linux software.

Having said that, Hyper-V is closely coupled to Windows Server 2008 and cannot be installed without it. On the upside this means that Hyper-V works with all the network cards (NICs), SCSI controllers and Fibre Channel HBAs that are supported by Windows. On the downside it also means that all your VMs must be temporarily halted if the host server needs a reboot after applying software updates.

Hyper-V also requires an x64 CPU and either Intel VT or AMD-V hardware virtualisation support. This is in contrast to the current market-leading hypervisor, VMware's ESX Server, which works with just about any x86-architecture CPU and rarely needs patching or rebooting. However, ESX Server needs special drivers for NICs and other I/O cards, so it's compatible with a smaller set of server hardware.
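If you're not sure whether an existing server meets these requirements, one quick check is to boot it from a Linux live CD and inspect the CPU flags. The short Python sketch below is our own illustration rather than any Microsoft tooling: it looks for the 'lm' (64-bit), 'vmx' (Intel VT) and 'svm' (AMD-V) flags in /proc/cpuinfo. Bear in mind that even if the flags are present, hardware virtualisation must also be enabled in the BIOS.

    # Quick check, from a Linux live CD, of whether a server meets
    # Hyper-V's CPU requirements. Illustrative sketch only.
    flags = set()
    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    print("64-bit (lm):   ", "lm" in flags)
    print("Intel VT (vmx):", "vmx" in flags)
    print("AMD-V (svm):   ", "svm" in flags)
    # The vmx/svm flags show CPU capability only; virtualisation must
    # also be switched on in the BIOS before Hyper-V will install.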

We tested Hyper-V by installing Windows Server 2008 Enterprise Edition x64 onto an empty 100GB partition, using the Full Installation option. Microsoft has said it will make a Server Core version available later this year. This could be used to build a Hyper-V system without the memory and disk overhead of the full Windows Server 2008 GUI, and would cost around $28 (~£15) to license. However, the true cost would not be negligible as it would still need to be managed remotely via a network connection, using a full Windows Server 2008 system. Organisations running more than one Hyper-V system would probably also buy System Center Virtual Machine Manager 2008. Currently this does not support Hyper-V, but Microsoft has said it will provide an update this year that will add this feature.

With our fresh installation of Windows Server 2008 finished, we configured Windows Firewall to block all incoming connections. This was particularly important as our Windows server would be connected directly to the internet via a simple ADSL modem that did not have a firewall to protect the server.

Next we connected our server to the ADSL modem and used the Automatic Update feature in the Windows Initial Configuration Tasks applet to update our system with the latest patches. This replaced the pre-release version of Hyper-V supplied on the Windows Server 2008 installation DVD with the finished version that's certified for use with production systems. One reboot was required to complete this process.

With these steps completed we were ready to use the Add Roles Wizard to install and configure our server to run Hyper-V. The wizard asked which NICs were to be used to create external virtual networks that could be used by VMs. Like many systems that will be used to run Hyper-V, our lab system had only one NIC installed, so we had little choice but to select this one for use by our VMs.

The wizard also recommended that at least one NIC be reserved for remote access to the server. This could be a little disconcerting for the unwary, as the wizard also said that this remote-access NIC could not be used by VMs. With only one NIC fitted, we had no choice but to proceed without a remote-access NIC. With these configuration options in place, we proceeded with the Hyper-V installation, which completed in less than two minutes. Two reboots were required, and when we logged back into the Windows Server desktop we checked the Update History using a link in the Initial Configuration Tasks applet and confirmed that our system was running the production version of Hyper-V.
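For the record, the list of candidate NICs the wizard presents can also be reproduced from a script. The sketch below, which assumes the third-party Python 'wmi' package is installed on the server, simply lists the machine's network adapters via WMI so you can see in advance which NICs the wizard will offer for external virtual networks.

    # List the network adapters Windows knows about, to see which NICs
    # the Add Roles Wizard will offer for external virtual networks.
    # Assumes the third-party 'wmi' package is installed on the host.
    import wmi

    conn = wmi.WMI()
    for nic in conn.Win32_NetworkAdapter():
        # only show adapters that appear in Network Connections
        if nic.NetConnectionID:
            print(nic.NetConnectionID, "-", nic.Name, "-", nic.MACAddress)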

Windows Server 2008's Server Manager applet has a three-pane interface, with a hierarchy on the left, details in the middle and actions on the right.

At this stage we could see a new Hyper-V option in the Server Manager (SM) applet, which could be used to manage Hyper-V setups running locally or on other servers via a network connection. The Windows Server 2008 version of SM provides a three-pane user interface, with the left panel providing a hierarchical view of the roles and features running on the server, and the central panel showing a detailed view of the selected item. The right-hand Actions panel provides links to relevant functions. Double-clicking on the Hyper-V Manager option in the left-hand panel expanded its hierarchy, allowing us to select Hyper-V running on the local server.
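The Hyper-V management tools sit on top of a WMI provider in the root\virtualization namespace, so the same information is available to scripts. As a rough illustration, and again assuming the third-party Python 'wmi' package, the sketch below lists the VMs on a Hyper-V server together with their state; passing computer="servername" to wmi.WMI() would query a remote host over the network instead.

    # List the virtual machines on a Hyper-V server and their state,
    # via the WMI provider in the root\virtualization namespace.
    # Sketch only; assumes the third-party 'wmi' package.
    import wmi

    virt = wmi.WMI(namespace=r"root\virtualization")
    states = {2: "Running", 3: "Off", 32768: "Paused", 32769: "Saved"}

    for vm in virt.Msvm_ComputerSystem(Caption="Virtual Machine"):
        print(vm.ElementName, "-", states.get(vm.EnabledState, vm.EnabledState))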

Before adding any VMs, we wanted to make a few changes to the basic Hyper-V configuration. For example, we used the right-hand Actions panel to launch the Virtual Network Manager and add an 'internal' virtual network, which promptly appeared in the Windows Device Manager as an extra NIC alongside the physical NIC and the 'external' virtual network adapter we created during the installation of Hyper-V. Internal virtual networks can be used by VMs and the host server. We could also create a 'private' virtual network, which could be used only by VMs running on our server.
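The virtual networks themselves are exposed through the same WMI namespace as Msvm_VirtualSwitch objects, so a short script can confirm which switches exist once the Virtual Network Manager has done its work. Again, this is our own sketch using the Python 'wmi' package rather than anything supplied with Hyper-V.

    # List the virtual networks (switches) defined on the Hyper-V host.
    # Sketch only; assumes the third-party 'wmi' package.
    import wmi

    virt = wmi.WMI(namespace=r"root\virtualization")
    for switch in virt.Msvm_VirtualSwitch():
        print("Virtual network:", switch.ElementName)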

To illustrate the difference between the 'internal' virtual network and the 'external' one created during installation, we created a VM to replace Windows Firewall with a Linux-based alternative.

The New Virtual Machine wizard lets you configure a VM to install an OS from an ISO image — in this case, IPCop.

We used the New option in the SM actions panel to launch the New Virtual Machine wizard, which created our VM and provided some configuration options. For example, the wizard let us configure the VM to install an operating system from an ISO image of the installation CD for IPCop. The wizard wouldn't let us configure our VM with Legacy NICs, so we created the VM without NICs and then used the actions panel to edit its settings to add two Legacy NICs, one connected to the internal VM network, the other to the external VM network. Legacy NICs emulate a popular DEC NIC, and can be used by just about any operating system that has a driver for this card. 'Enlightened' operating systems can use the more efficient non-legacy, or enlightened-mode, NICs, provided a suitable driver is available for that OS. Currently Microsoft supplies such drivers for Windows Vista, Windows Server 2008 and Windows Server 2003. The drivers are supplied as part of the Integration Services Suite, which must be installed inside a VM after its operating system has been installed.
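The two kinds of adapter are represented by different WMI classes, so it is straightforward to check which type a host's VMs are actually using. The sketch below, once more assuming the Python 'wmi' package, counts the legacy (emulated) and enlightened (synthetic) network adapters defined across all VMs on the server.

    # Count the legacy (emulated) versus enlightened (synthetic) network
    # adapters defined for the VMs on this host. Sketch only; assumes
    # the third-party 'wmi' package.
    import wmi

    virt = wmi.WMI(namespace=r"root\virtualization")
    legacy = virt.Msvm_EmulatedEthernetPortSettingData()
    synthetic = virt.Msvm_SyntheticEthernetPortSettingData()

    print("Legacy NICs defined:     ", len(legacy))
    print("Enlightened NICs defined:", len(synthetic))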

Installing the IPCop Linux-based firewall on a new virtual machine.

With the VM created we could right-click on it in the central SM panel to get a drop-down menu, and then click on the 'Connect...' option. This opened a window onto our VM, showing its screen display and providing control buttons to start, stop and pause it. We used the Start button to boot this VM, and watched the screen display as it loaded the IPCop installation software.
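Starting and stopping VMs can also be scripted through the same WMI provider rather than the Connect window. Below is a rough sketch, again assuming the Python 'wmi' package; the VM name used here is purely for illustration. The method's output parameters include a return code, where 0 means the request completed and 4096 means it was accepted as a background job.

    # Start a VM by name via the Hyper-V WMI provider. Sketch only;
    # assumes the third-party 'wmi' package. RequestedState 2 starts
    # a VM, 3 turns it off.
    import wmi

    VM_NAME = "IPCop firewall"   # hypothetical VM name for illustration

    virt = wmi.WMI(namespace=r"root\virtualization")
    vm = virt.Msvm_ComputerSystem(Caption="Virtual Machine",
                                  ElementName=VM_NAME)[0]

    outcome = vm.RequestStateChange(RequestedState=2)
    # Output parameters include the return code: 0 = done, 4096 = job started
    print(outcome)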

Unticking the TCP/IP v4 and v6 bindings from the server's physical NIC prevents Windows Server 2008 from accessing it, allowing a broadband modem to assign a public IP address to a Linux firewall VM via DHCP.

During the IPCop installation we assigned a static class C address to the 'internal' VM NIC. This would be the LAN address of the firewall. We configured the other VM NIC to obtain its address using DHCP. This NIC was connected to the external virtual network, which was in turn connected to our broadband modem. The broadband modem would assign our public IP address to the firewall using DHCP. However, for this to happen reliably we also needed to prevent our Windows Server 2008 system from using the physical NIC, which we did by removing (unticking) the TCP/IP v4 and v6 bindings from the NIC using the Windows Control Panel. Finally we configured our Linux firewall as a DHCP server to provide LAN IP addresses to computers attached to the internal network.
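A quick way to confirm that the unbinding worked is to list which adapters still have TCP/IP enabled, along with their addresses. The sketch below is our own illustration using the Python 'wmi' package; once the TCP/IP bindings have been removed from the physical NIC, it should no longer appear in the output.

    # Show which adapters the Windows Server 2008 host is still using
    # for TCP/IP after unbinding the physical NIC. Sketch only;
    # assumes the third-party 'wmi' package.
    import wmi

    conn = wmi.WMI()
    for cfg in conn.Win32_NetworkAdapterConfiguration(IPEnabled=True):
        print(cfg.Description, "-", cfg.IPAddress)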

The end result was a VM running Linux firewall software that would provide LAN IP addresses to our host server and to any of its VMs, provided these systems were configured to use the internal virtual network and to obtain their addresses via DHCP. The host server and its VMs would connect to the internet via the Linux firewall. Had we used a 'private' network instead of an 'internal' one, our setup would have allowed a number of VMs to run inside an extremely secure DMZ, protected by the Linux firewall VM from network traffic originating on the LAN and the host server. This configuration would have required a second NIC to be fitted in the host server so that it could access the LAN.

During our tests the Windows Control Panel reported four CPUs and 3GB of RAM fitted in our system, regardless of the CPU and RAM requirements of the VMs it was running. This suggests that Windows Server 2008 rather than Hyper-V was managing the hardware used by virtual machines. This is in contrast to VMware ESX Server, where the Linux-based console operating system normally reports one CPU and around 512MB of RAM, even though the server hardware could be configured with, for example, four CPUs and 3GB of RAM. With ESX Server, the VMware hypervisor, rather than the console operating system, is clearly managing the CPUs and RAM used by the VMs. The Hyper-V architecture also runs a stack of virtualisation software, including the VMBus used by all the VMs, in the parent Windows Server 2008 partition. This means that all VMs on a server could be badly affected if the parent partition crashed or became unusable because some software consumed too many resources.
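You can see this for yourself by asking the parent partition what hardware it thinks it owns: unlike ESX Server's console OS, it reports the full complement of processors and memory. Here is a one-query sketch, again assuming the Python 'wmi' package.

    # Show the CPUs and RAM visible to the parent Windows Server 2008
    # partition. Sketch only; assumes the third-party 'wmi' package.
    import wmi

    cs = wmi.WMI().Win32_ComputerSystem()[0]
    print("Logical processors:", cs.NumberOfLogicalProcessors)
    print("Physical memory (GB): %.1f" % (int(cs.TotalPhysicalMemory) / 2**30))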

We tested the effect of a heavily loaded parent partition by making our host server run a memory benchmark set to 'realtime' process priority in Task Manager. With this workload running, a Vista VM took 32.1 seconds to launch Internet Explorer and open Google, compared with 2.5 seconds when the benchmark was not running. Again, this is in contrast to the VMware architecture, where a problem with the console operating system is unlikely to affect all the VMs hosted by the hypervisor. Our test results lead us to recommend running only the barest minimum of software in the host Windows Server 2008 environment.
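Our measurement was deliberately crude: we simply timed how long a page took to appear inside the VM with and without the host-side load. Something similar can be scripted inside a guest. The sketch below is an illustration rather than the IE-based harness we used; it times a single HTTP fetch with Python's standard library, and running it before and after starting a host-side workload gives a rough indication of how badly the guest is being starved.

    # Time a single page fetch from inside a VM, as a rough indicator
    # of the impact of a host-side workload. Illustrative sketch only,
    # not the IE-based test used in the review.
    import time
    import urllib.request

    URL = "http://www.google.com"   # any lightweight page will do

    start = time.time()
    urllib.request.urlopen(URL, timeout=60).read()
    print("Fetched %s in %.1f seconds" % (URL, time.time() - start))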

In our tests on a quad-core workstation, we found that Hyper-V could allocate four virtual processors to its VMs. As it stands, this first version of Hyper-V does not make USB devices available to VMs. In addition, we could not drag and drop files from the host Windows Server GUI onto the desktops of VMs running Windows. However, Hyper-V will automatically pause and resume VMs when the host server is started or shut down. All this is in line with what you'd expect from VMware ESX Server. However, VMware's desktop virtualisation product, Workstation 6, enables VMs to use USB devices and allows files to be dragged from the host PC's desktop and dropped onto the desktop of a VM running Windows.

 
