Virtualisation suites compared

Getting the foundation right for cloud means succeeding in virtualisation, but with multiple products available, which one is right for your business?
Written by Steven Turvey and Thomas TestLab, Contributor

Before we launch into this round-up, it's time for a trip down memory lane. Enex TestLab has been involved with virtualisation technology since 2004, testing and evaluating a variety of flavours privately for organisations, as well as for publication. During this time, many concepts have evolved to more sophisticated levels, and the market for virtual technology has matured. In those early days, there really was just one pioneer: VMware.

But VMware was soon joined by vendors such as Microsoft, and the open-source community stepped up with the Xen hypervisor, whose commercial sponsor, XenSource, was ultimately acquired by Citrix. VirtualBox emerged as a Sun product (now Oracle's), and the base package is still available under open-source licensing.

Today, there are multiple types of virtualisation, which are sometimes confused and often lumped into the same basket. The most basic is local, system-based application virtualisation, where applications are essentially segmented and launched individually. Early proponents of this technology were AppSense and Sun. The primary benefits of this per-application type of virtualisation are security, development and platform independence. It's a technology well suited to thin clients and other environments with low processing power.

Desktop virtualisation followed, enabling enterprises to really control their standard operating environments (SOEs) and manage their licensing. It also improved patch management and administration through more centralised command-and-control capabilities. This has been one of the holy grails pursued by the likes of Microsoft, with the support of Intel and its vPro embedded technologies.

The next step up is server virtualisation, and this is really where architects and administrators have been empowered to divorce server applications from the underlying hardware. It provides for far more robust datacentres, and enables redundancy, portability, scalability, availability and much more.

In this feature, we round up the common virtualisation vendors, and look at the good, the bad and the bottom line for each. VMware is joined by Citrix, Microsoft and Oracle.

Citrix XenServer 6.0.201

Citrix has been working hard to improve XenServer's features and usability since version 5.6, in an attempt to overtake VMware. In some respects, it also eclipses Microsoft's Hyper-V.

Image: XenCenter's general view and new-VM wizard in Citrix XenServer 6.0.201 (Credit: Enex TestLab)

XenServer now tips the scales at 658MB on the install CD. While that's a lot less than Server 2008 with Hyper-V, it's still a good deal larger than VMware's vSphere, until you factor in that the CD also includes the hypervisor and the XenCenter management utility. Once you start adding up the extra VMware bits and bobs, XenServer is actually the leanest of the three.

It is also worth noting that XenServer takes a significantly different approach to its hypervisor, compared with Microsoft and VMware. The latter two predominantly use proprietary drivers and abstraction layers, whereas XenServer works with the hardware and existing drivers to simplify and speed up the hypervisor interaction with physical hardware.

For example, the XenServer control domain makes use of standard open-source device drivers, which should result in broader hardware support (although this is potentially a downside, due to a lack of vendor collaboration in driver hardening and/or patching). As another example, rather than using a proprietary file system, XenServer uses the native storage file system; VM snapshot requests are offloaded directly to the storage area network vendor's API.

Two separate physical boxes are required to run the XenCenter application and the XenServer host. The XenCenter machine requires a Microsoft Windows operating system — Windows 7, Windows XP, Windows Vista, Windows Server 2003, Windows Server 2008 or Windows Server 2008 R2 (all editions and versions).

Set-up is great. It's the easiest and most painless of the three hypervisors to install. The same CD is also used to load the XenCenter management console on a Windows-based PC.

To its credit, XenCenter is remarkably easy to use, and has a very clean interface; we consider it to be more user friendly than either Hyper-V or VMware. Creating, backing up and copying VMs is a doddle, as is adding other host servers to the cluster and generating performance statistics.

Host specifications are quite similar to Hyper-V's: the limits guide notes support for up to 130 logical CPUs per host (depending on the physical CPU type) and 1TB of host RAM. Maximum VM RAM, on the other hand, at 128GB for Windows guests, is a little better than Hyper-V's, but still lags VMware's. A physical GPU can also be assigned to a VM, so the guest can leverage GPU instructions, which is very useful for delivering 3D graphical applications via virtual desktops.

A cluster can contain a maximum of 16 nodes and up to 800 VMs, and dynamic memory allocation amongst VMs is supported.

Version 6 has improved guest OS support, including Ubuntu 10.04 (32/64-bit); updated support for Debian Squeeze 6.0 (64-bit), Oracle Enterprise Linux 6.0 (32/64-bit) and SLES 10 SP4 (32/64-bit); and experimental VM templates for CentOS 6.0 (32/64-bit), Ubuntu 10.10 (32/64-bit) and Solaris 10.

The improvements in virtual networking centre on distributed virtual switching. A new fail-safe mode allows Cross-Server Private Network, ACL, quality of service (QoS), RSPAN and NetFlow settings to continue to be applied to a running VM in the event of a vSwitch Controller failure.

A memory over-commit feature called Dynamic Memory Control (DMC) is available, although only in the XenServer Advanced or higher editions. DMC is a "ballooning" operation: when the hypervisor runs low on memory, it sets a target to which the balloon driver inside each guest "inflates", creating artificial memory pressure within the VM and forcing the guest operating system to free pages or push them to the page file; the pages pinned by the balloon driver can then be reclaimed by the hypervisor. It is not as mature as VMware's memory management, however, which uses three mechanisms: transparent page sharing (TPS), ballooning and compression.
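
For a feel for the mechanics, here is a minimal, illustrative Python sketch of the ballooning idea; the function, names and figures are invented for illustration, and this is not Citrix's implementation.

    # Illustrative model of ballooning (not Citrix's DMC code): when host
    # memory runs short, the hypervisor raises each guest's balloon target;
    # the in-guest driver "inflates" by pinning that much memory, which the
    # hypervisor can then hand to other VMs.
    def balloon_targets(host_free_mb, reserve_mb, vms):
        """vms: list of dicts with 'name', 'allocated_mb' and 'min_mb'."""
        shortfall = max(0, reserve_mb - host_free_mb)
        # A VM can only give back what sits above its configured floor.
        room = {v['name']: v['allocated_mb'] - v['min_mb'] for v in vms}
        total = sum(room.values()) or 1
        # Inflate each balloon in proportion to what its VM can spare.
        return {name: min(r, shortfall * r // total) for name, r in room.items()}

    targets = balloon_targets(host_free_mb=512, reserve_mb=2048, vms=[
        {'name': 'web01', 'allocated_mb': 4096, 'min_mb': 1024},
        {'name': 'db01', 'allocated_mb': 8192, 'min_mb': 6144},
    ])
    print(targets)  # {'web01': 921, 'db01': 614}: roughly 1.5GB reclaimed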

Citrix has a powerful provisioning service that allows server workloads to be provisioned and re-provisioned in real time from a single shared disk image. This streamlines operations for administrators, as they can simply patch the master image. Dynamic workload streaming is particularly useful, because peak load periods and even migration between a testing and production environment can be catered for.

Fault tolerance is well supported, and a VM can be automatically restarted on another server, should a host fail. Or, if desired, a VM can be mirrored on another host for seamless failover. VM snapshots can be scheduled and archived, but high-availability features are only on XenServer Advanced Edition or higher.

For enterprise environments using XenDesktops with IntelliCache, or VMs protected via high-availability features, there is a limitation of 50 VMs or XenDesktop VMs per host.

XenServer is able to balance workloads, and it supports two optimisation modes. Performance Optimisation ensures that a minimum performance level is maintained, while Density Optimisation places the VMs on the minimum number of hosts.
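
The difference between the two modes is easiest to see in code. The toy Python sketch below places VMs either on the fewest possible hosts (density) or on the least-loaded host (performance); XenServer's real placement logic is, of course, far more sophisticated than this illustration.

    # Toy contrast of the two optimisation modes; not Citrix's algorithm.
    def place(vms, hosts, mode):
        """vms: {name: ram_mb}; hosts: {name: free_ram_mb}, mutated in place."""
        placement = {}
        for vm, ram in sorted(vms.items(), key=lambda kv: -kv[1]):
            if mode == 'density':
                # Pack onto the fullest host that still fits.
                candidates = sorted(hosts, key=lambda h: hosts[h])
            else:  # 'performance'
                # Spread the load: always pick the emptiest host.
                candidates = sorted(hosts, key=lambda h: -hosts[h])
            host = next(h for h in candidates if hosts[h] >= ram)  # raises if nothing fits
            hosts[host] -= ram
            placement[vm] = host
        return placement

    vms = {'a': 2048, 'b': 1024, 'c': 512}
    print(place(dict(vms), {'h1': 8192, 'h2': 8192}, 'density'))      # all land on h1
    print(place(dict(vms), {'h1': 8192, 'h2': 8192}, 'performance'))  # spread across both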

As with other Citrix products, XenServer still requires a separate licensing server. A "grace period" feature covers disconnection of VMs and hosts from that server: if the five-minute heartbeat message from the licence server goes unanswered, operation can continue for up to 30 days without reconnection.
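
Using only the figures quoted above, a five-minute heartbeat and a 30-day grace period, the resulting state machine looks roughly like this illustrative Python sketch:

    # Illustrative sketch of the licence-server grace logic described above.
    from datetime import datetime, timedelta

    HEARTBEAT_INTERVAL = timedelta(minutes=5)
    GRACE_PERIOD = timedelta(days=30)

    def licence_state(last_heartbeat, now=None):
        silence = (now or datetime.utcnow()) - last_heartbeat
        if silence <= HEARTBEAT_INTERVAL:
            return 'licensed'
        if silence <= GRACE_PERIOD:
            return 'grace'       # keep running, but warn the administrator
        return 'unlicensed'      # grace exhausted: licensed features withdrawn

    print(licence_state(datetime.utcnow() - timedelta(days=3)))  # 'grace'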

To ensure a seamless and simple migration across physical hosts, XenServer also supports virtual network switching.

Microsoft Windows Server 2008 R2 SP1 Hyper-V

Microsoft has been playing catch-up with VMware and Citrix, with the current version of Hyper-V certainly stepping up to the mark as a strong contender. It is, however, a big installation; almost 3GB (or up to 10GB for the full server installation), whereas the other two use a Linux underpinning and happily reside on a CD (at least for the basic hypervisor).

Image: Microsoft Windows Server 2008 R2 SP1 Hyper-V (Credit: Enex TestLab)

Microsoft's approach is to install Windows Server 2008 R2, and then install Hyper-V as a Role, which is really quite a simple process. The Hyper-V Manager is easy to drive, and it has a simple and logical layout. Creating and configuring the VMs is also easy, and pretty much any operation you would want to carry out can be achieved via the manager. However, in a large cluster, Hyper-V Manager is simply not sufficient; you cannot automate or run tasks in a batch mode, so be prepared for lots of pointing and clicking.

To take the pain out of managing a large infrastructure, the Microsoft System Center Virtual Machine Manager (SCVMM) is the way to go, as it removes much of the repetitive manual work. SCVMM is, incidentally, also able to manage VMware's ESX Server.

Hyper-V is big on features, although in some instances it does lag behind the latest version of ESXi. For example, each host can have a maximum of 64 physical CPUs and 512 vCPUs, whereas ESXi supports a maximum of 160 logical CPUs and 2048 vCPUs per host.

VM virtual processor support is naturally dependent on the OS, but is limited to a maximum of four per VM.

Processor compatibility mode allows VMs to migrate across hardware, where the physical hosts can have different CPU architectures. This feature is new to Hyper-V; in the previous version, hosts had to contain identical CPU architectures, which meant that you could migrate from Intel to Intel or AMD to AMD hosts, but not from Intel to AMD.

Physical memory per host is a healthy 1TB, but the maximum per VM is just 64GB. However, Hyper-V does feature dynamic memory, where the maximum and minimum RAM can be specified, and allocated memory can grow or shrink depending on the VM's needs. VMs can be assigned priority levels, so that when the host begins to exhaust physical memory, the RAM allotment to the VMs will be reduced based on their priority.
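
A hypothetical Python sketch of that idea (not Microsoft's algorithm): each VM keeps its configured minimum, and the remaining physical RAM is shared out in proportion to priority.

    # Illustrative model of priority-based dynamic memory, invented for
    # this article: minimums are honoured first, then the spare physical
    # RAM is divided according to each VM's priority weight.
    def dynamic_memory(host_mb, vms):
        """vms: list of dicts with 'name', 'min_mb', 'max_mb', 'priority'."""
        spare = host_mb - sum(v['min_mb'] for v in vms)
        assert spare >= 0, 'host cannot cover the configured minimums'
        weights = sum(v['priority'] for v in vms)
        return {v['name']: min(v['max_mb'],
                               v['min_mb'] + spare * v['priority'] // weights)
                for v in vms}

    print(dynamic_memory(16384, [
        {'name': 'sql', 'min_mb': 4096, 'max_mb': 16384, 'priority': 8},
        {'name': 'file', 'min_mb': 1024, 'max_mb': 8192, 'priority': 2},
    ]))  # {'sql': 13107, 'file': 3276}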

The size of a Hyper-V failover cluster is limited to 16 nodes, with a maximum of 1000 VMs per cluster and a limit of 384 running VMs per physical machine. The maximum number of VMs allowable per node does not change, regardless of physical cluster size.

Guest OSes include Windows and flavours of SUSE, Red Hat and CentOS; other versions of Linux are unsupported, but many are reported to run without any issues.

High availability requires hardware that has passed "Certified for Windows" testing, and in practice nodes of near-identical specification: matching operating system editions, CPU families and interfaces, such as network and host bus adapters. Servers must also be members of an Active Directory domain, necessitating a domain controller somewhere in the schema.

The "Live Migration" feature, enabled by the (new to Win2K8R2) Cluster Shared Volumes (CSV), recommends use of a private network for migration traffic. This is in addition to the private network requirement for internal cluster communication, separate virtual networking provision and separate storage network.

Virtual Networking follows the standard virtual switching approach, with the decoupling of the OS network stack to allow better throughput, although I/O performance will depend on the number of VMs attempting to communicate with the outside world.

For load-balancing services, the standard Microsoft Network Load Balancing (NLB) component is required, and is configured in the same manner as for physical nodes.
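
Conceptually, this style of load balancing is stateless: every node applies the same hash to incoming traffic and serves only its own share. The Python toy below illustrates the principle only; NLB's actual algorithm and affinity modes differ.

    # Toy hash-based distribution in the spirit of NLB (not its real algorithm).
    import hashlib

    def owning_node(client_ip, nodes):
        digest = hashlib.md5(client_ip.encode()).digest()
        return nodes[digest[0] % len(nodes)]   # same answer on every node

    nodes = ['hv-node1', 'hv-node2', 'hv-node3']
    for ip in ('10.0.0.1', '10.0.0.2', '10.0.0.3'):
        print(ip, '->', owning_node(ip, nodes))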

The inclusion of snapshotting via the Hyper-V Manager goes some way towards addressing the relative immaturity of the Microsoft product, which sports all of the features expected for taking, managing and redeploying snapshots to live VMs. While the snapshotting feature can be automated via scripting, it is principally for use in test and development environments, and is not ideal for a transactional production infrastructure; it certainly should not be considered the only disaster-recovery (DR) solution in a production environment.

VMware vSphere ESXi 5

VMware is an old hand in the virtualisation arena, so taking a peek at its products on the web can leave your head spinning, as there's such a wide range of applications. One trap that the unwary can fall into is that some of the features described are not available on the standard product; they require additional purchases to plug in the additional functionality that you may require.

Image: VMware vSphere ESXi 5 (Credit: Enex TestLab)

For many, vSphere ESXi is the cream of the crop, and the other vendors are simply playing catch-up. While it is true that VMware has a product for every scenario, some of the other vendors' products can be a perfect fit in terms of features suiting a particular infrastructure and scenario.

VMware is not quite as easy to set up as XenServer, for example, but it is still relatively quick and painless. The resultant interface on the host server is pure Linux CLI, and to facilitate the remote management of the host, a vSphere Client must be installed on a Windows PC as a minimum.

The client interface is clean and simple to navigate, so setting up and managing VMs is also a simple proposition. However, to ensure full management of a large-scale VMware virtual infrastructure, vCenter Server must be installed, which involves an additional cost. vCenter is a one-stop management tool, and the only tool you will need; it effortlessly manages tasks such as VM migration, load balancing and high availability, to name a few.

As previously mentioned, VMware is feature rich, but aspects such as fault tolerance are only available on Enterprise Editions and above. Disaster recovery requires the Site Recovery Manager plug-in, and virtual distributed switching requires vSphere Enterprise Plus.

For the high-availability requirements of large-scale enterprises, VMware's advanced storage-management component, VMFS, is a cluster file system that leverages shared storage to allow multiple vSphere hosts to read and write to the same storage concurrently. It underpins live migration of running virtual machines from one physical server to another, automatic restart of a failed virtual machine on a separate physical server, and the clustering of virtual machines across different physical servers.

For reliability of the platform, driver hardening is maintained as a collaborative exercise with hardware vendors, whereas the Microsoft and Citrix products rely on generic Windows or Linux drivers.

vSphere is the leader in terms of ultimate scalability: each host can sport up to 160 logical CPUs, 2TB of RAM and an impressive 2048 vCPUs, all shared amongst a maximum of 512 active VMs per host. The specs of individual VMs are equally impressive, with up to 32 vCPUs and 1TB of RAM. A cluster can consist of 32 nodes, with a total of 3000 VMs.

The ability to discretely manage these components for each unique VM is the true strength of VMware. The lack of reliance on a base operating system to translate and interface eliminates the I/O bottlenecks experienced by the other two products.

Oracle VirtualBox 4.1.18

Oracle's VM VirtualBox is a desktop-virtualisation environment that's compatible with x86 and AMD64/Intel64. Although it is the only free open-source virtualisation tool available at a professional level, it is not a direct competitor to the other three virtualisation implementations. Those are aimed at large IT infrastructures, while VirtualBox is targeted to personal or small-office use.

Image: Oracle VirtualBox 4.1.18 (Credit: Enex TestLab)

Oracle VM VirtualBox version 4.1.18 runs on Windows, Linux, Macintosh and Solaris hosts, and supports a large number of guest operating systems, including Windows (NT 4.0, 2000, XP, Server 2003, Vista, Windows 7), OS X, DOS/Windows 3.x, Linux (2.4 and 2.6), Solaris and OpenSolaris, OS/2 and OpenBSD. Guest support is rounded out by the "Guest Additions": driver and patch packages that improve compatibility and functionality.

VirtualBox can present up to 32 virtual CPUs to each VM, irrespective of the number of physical CPU cores present on the host device. Configurable Physical Address Extension (PAE) support allows 32-bit guest operating systems to address more than 4GB of memory; some Linux OSes (such as Ubuntu) require this to be enabled to permit virtualised operation. vCPU hot-plugging allows "on-the-fly" expansion of the CPU resources of a given VM. SAN boot capability is available, dependent on a guest OS using PXE boot and iSCSI targeting via the host (using experimental features).
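
All of these settings are reachable from VBoxManage, VirtualBox's command-line front end. A hedged sketch follows; the VM name is hypothetical, and flag availability should be checked against your version with VBoxManage modifyvm --help.

    # Hedged sketch: configuring the CPU features described above via
    # VBoxManage. "demo-vm" is an invented VM name.
    import subprocess

    def vbox(*args):
        subprocess.run(['VBoxManage', *args], check=True)

    vbox('modifyvm', 'demo-vm', '--cpus', '4')         # up to 32 vCPUs per VM
    vbox('modifyvm', 'demo-vm', '--pae', 'on')         # let 32-bit guests see >4GB
    vbox('modifyvm', 'demo-vm', '--cpuhotplug', 'on')  # allow on-the-fly changes
    # With the VM running and hot-plug enabled, a vCPU can be added live:
    vbox('controlvm', 'demo-vm', 'plugcpu', '3')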

Installation

Installation is very straightforward. VirtualBox (being a type-two hypervisor) was supplied for testing as an executable application to be installed on an existing Windows 7 OS. The installation wizard guides you through the install without issues, handling directories and registry entries, and delivers a very user-friendly interface.

Virtualisation

When VirtualBox is executed for the first time, a nice wizard guides you through the virtualisation process. Firstly, you specify the name and OS type for your VM. You must also allocate RAM to the VM; a base amount is recommended according to the guest OS selected, and the practical maximum is however much RAM you can allocate without hurting the host PC's performance. A virtual hard disk is then created by the wizard, with the operator selecting either a dynamically expanding or a fixed-size image. A dynamically expanding image initially occupies a small amount of space on your physical drive, then grows as needed up to the specified VM drive size. A fixed-size image does not grow; it is stored on your physical drive as a file of approximately the same size as the specified VM hard drive.

Once your VM has been created, it will boot as a blank machine within the VirtualBox client. Once the VM has booted, you can specify the disk drive to install your OS as either your physical disk drive (which contains your bootable media) or as an ISO image contained somewhere on your hard drive. After the media path has been specified, the OS will boot and install as normal.
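
The same wizard steps can also be scripted against VBoxManage, which is handy for repeatable test set-ups. Below is a hedged Python sketch; the VM name, OS type, sizes and ISO path are all illustrative, and flags may vary between VirtualBox versions.

    # Hedged sketch: creating and booting a VM from the command line,
    # mirroring the wizard steps described above.
    import subprocess

    def vbox(*args):
        subprocess.run(['VBoxManage', *args], check=True)

    vbox('createvm', '--name', 'demo-vm', '--ostype', 'Ubuntu_64', '--register')
    vbox('modifyvm', 'demo-vm', '--memory', '2048')    # the RAM-allocation step
    # Dynamically expanding disk by default; add '--variant', 'Fixed' for fixed-size.
    vbox('createhd', '--filename', 'demo-vm.vdi', '--size', '20480')  # size in MB
    vbox('storagectl', 'demo-vm', '--name', 'SATA', '--add', 'sata')
    vbox('storageattach', 'demo-vm', '--storagectl', 'SATA', '--port', '0',
         '--device', '0', '--type', 'hdd', '--medium', 'demo-vm.vdi')
    # Boot from an ISO image rather than a physical disc:
    vbox('storagectl', 'demo-vm', '--name', 'IDE', '--add', 'ide')
    vbox('storageattach', 'demo-vm', '--storagectl', 'IDE', '--port', '0',
         '--device', '0', '--type', 'dvddrive', '--medium', 'ubuntu.iso')
    vbox('startvm', 'demo-vm')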

Access to host files from a guest is more complicated, as there is no drag-and-drop support between the VM and the physical hard drive. Instead, file sharing relies upon shared folders, which require the Guest Additions to function.
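
In outline, the workflow looks like the following hedged Python sketch: the folder is declared on the host via VBoxManage, then mounted inside the guest once the Guest Additions are installed (the VM name, share name and paths are illustrative).

    # Hedged sketch of the shared-folder workflow described above.
    import subprocess

    subprocess.run(['VBoxManage', 'sharedfolder', 'add', 'demo-vm',
                    '--name', 'exchange', '--hostpath', '/home/user/exchange',
                    '--automount'], check=True)
    # Inside a Linux guest:   mount -t vboxsf exchange /mnt/exchange
    # Inside a Windows guest: net use E: \\vboxsvr\exchange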

VirtualBox supports full virtualisation within its client, which allows complete operating system functionality from the guest. All features pertaining to each VM are easily altered within the VirtualBox client, such as RAM, allocated video memory and hard-drive size.

Features

Windows, Linux and OS X versions are available in two configurations: one partly proprietary and one fully open source. The open-source version, VirtualBox Open Source Edition (OSE), lacks the ability to use USB peripherals, and includes the open-source VNC protocol instead of Microsoft's RDP.

VirtualBox does not have a limitation on how many VMs can be installed on one PC, so the only constraints are host hard-drive space and host RAM allocation.

VirtualBox supports the following guest systems:

  • Windows NT 4.0: all versions, editions and service packs are fully supported, although there are some known issues with older service packs, so SP6a is recommended. Limited Guest Additions are available

  • Windows 2000/XP/Server 2003/Vista/Server 2008/Windows 7: all versions, editions and service packs are fully supported (including 64-bit versions, under the preconditions listed below). Guest Additions are available

  • DOS/Windows 3.x/95/98/ME: limited testing has been performed. Use beyond legacy installation mechanisms is not recommended. Guest Additions are not available

  • Linux 2.4: limited support

  • Linux 2.6: all versions/editions are fully supported (32-bit and 64-bit), and Guest Additions are available. Kernel 2.6.13 or higher is recommended, although bugs in certain kernel versions can prevent VM operation

  • Solaris 10, OpenSolaris: fully supported (32-bit and 64-bit). Guest Additions are available

  • FreeBSD: requires hardware virtualisation to be enabled. Limited support. Guest Additions are not available yet

  • OpenBSD: requires hardware virtualisation to be enabled. Versions 3.7 and later are supported. Guest Additions are not available yet

  • OS/2 Warp 4.5: requires hardware virtualisation to be enabled. Only MCP2 is officially supported; other OS/2 versions may or may not work. Guest Additions are available with a limited feature set.

VirtualBox supports 64-bit guest operating systems, and even 32-bit host operating systems, provided that the following conditions are met:

  1. You need a 64-bit processor with hardware-virtualisation support

  2. You must enable hardware virtualisation for the particular VM that you want 64-bit support for; software virtualisation is not supported for 64-bit VMs

  3. If you want to use 64-bit guest support on a 32-bit host operating system, you must also select a 64-bit operating system for the particular VM. Since supporting 64 bits on a 32-bit host incurs additional overhead, VirtualBox only enables this support on explicit request

  4. On 64-bit hosts (which typically come with hardware-virtualisation support), 64-bit guest operating systems are always supported, regardless of settings. For 64-bit operation, the Advanced Programmable Interrupt Controller (APIC) must be enabled, particularly in the case of 64-bit Windows guests. 64-bit Windows VMs also require the virtual Intel NIC, as the emulated AMD adapter is not supported under 64-bit Windows.

Limitations

The following guest SMP (multi-processor) limitations exist:

  • Poor performance with 32-bit guests on AMD CPUs. This affects mainly Windows and Solaris guests, but possibly also some Linux kernel revisions. This has been partially solved in version 3.0.6 for 32-bit Windows NT, 2000, XP and 2003 guests. It requires version 3.0.6 or higher Guest Additions to be installed

  • Poor performance with 32-bit guests on certain Intel CPU models that do not include virtual APIC hardware optimisation support. This affects mainly Windows and Solaris guests, but possibly also some Linux kernel revisions. This has been partially solved in 3.0.12 for 32-bit Windows NT, 2000, XP and 2003 guests. It requires 3.0.12 or higher Guest Additions to be installed

  • 64-bit guests on some 32-bit host systems with VT-x can cause system instability

  • For basic Direct3D support in Windows guests to work, the Guest Additions must be installed in Windows "safe mode", with manual intervention to prevent Windows system DLL restoration. This does not apply to the experimental WDDM Direct3D video driver, which is available for Vista and Windows 7 guests and ships with VirtualBox 4.1

  • On Windows guests, a process launched via the guest control execute support will not be able to display a graphical user interface unless the user account under which it is running is currently logged in and has a desktop session

  • Accounts without a password are not supported by the guest control feature as standard; group-policy intervention is required to enable access

  • The VBoxManage modifyhd compact command is currently only implemented for VDI files. At the moment, the only way to optimise the size of a virtual disk image in other formats (VMDK, VHD) is to clone the image, and then use the cloned image in the VM configuration (see the sketch after this list)

  • OVF localisation (multiple languages in one OVF file) is not yet supported. Some OVF sections, like StartupSection, DeploymentOptionSection and InstallSection, are ignored.
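
As flagged in the list above, here is a hedged sketch of the clone-based workaround for compacting non-VDI images (file names are illustrative):

    # Hedged sketch: 'modifyhd --compact' only works on VDI images, so a
    # VMDK image is first cloned across to the VDI format.
    import subprocess

    def vbox(*args):
        subprocess.run(['VBoxManage', *args], check=True)

    vbox('clonehd', 'disk.vmdk', 'disk.vdi', '--format', 'VDI')
    vbox('modifyhd', 'disk.vdi', '--compact')
    # Finally, point the VM's storage attachment at disk.vdi instead of disk.vmdk.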

Some VirtualBox features are labelled as experimental. Such features are provided on an "as is" basis, and are not formally supported. The experimental features are as follows:

  • WDDM Direct3D video driver for Windows guests

  • Hardware 3D acceleration support for Windows, Linux and Solaris guests

  • Hardware 2D video playback acceleration support for Windows guests

  • PCI pass-through (Linux hosts only)

  • Mac OS X guests (Mac hosts only)

  • ICH9 chipset emulation

  • EFI firmware

  • Host CD/DVD drive pass-through

  • Support of iSCSI via internal networking

  • Synthetic CPU reporting.

The bottom line

The bottom line is, as always, that the product that suits you best, at the right price, wins. When you break it down, all of these offerings have calibre. VirtualBox is an inexpensive path, but it's really only suited to an individual or small business. Between the other three, there are key features and capabilities to consider.

When it comes down to it, our first choice would be VMware for the larger enterprise infrastructure, as it simply has more scalability than Microsoft Hyper-V or Citrix XenServer and is a more mature product. Price might also be less of a concern when you consider its feature set. However, the other products should not be overlooked. Each has great points to consider, and might actually suit your needs better when it comes time to reach into your pocket. Evaluating VM products is a challenge in abstraction, but you should look at your predominant environment and predicted future needs before you jump in.

Product comparison

Citrix XenServer 6.0.201

Pros:

  • Easy to install

  • Greater support for industry-standard device drivers

  • No extra charge for most high-end functionality

  • Single console for all editions

  • Up to 16 vCPUs and 128GB of RAM per VM

  • Support via forums and the XenSource community.

Cons:

  • A Windows application only, not a web console

  • Supported tools are not as advanced as VMware's.

Bottom line: XenServer has the most features of any free hypervisor, is the easiest of the three to install and manage, and offers excellent performance, with VMs supporting up to 16 vCPUs.

Microsoft Windows Server 2008 R2 SP1 Hyper-V

Pros:

  • Best integration with Microsoft infrastructure

  • A strong set of enterprise features, due to be improved soon

  • Strong development focus from Microsoft.

Cons:

  • Large-cluster management can be more difficult

  • Only four vCPUs and 64GB of RAM per VM.

Bottom line: It's still not as mature as VMware or XenServer, but it has a lot of momentum. Integration in a Windows environment will make this a strong hypervisor for those running mainly Microsoft.

VMware vSphere ESXi 5

Pros:

  • Easy to install and manage from the vSphere Client

  • Many advanced features are available

  • Good support via forums

  • Many certified engineers are available in the workforce

  • Tools are available to assist in the migration to virtual.

Cons:

  • Limited in terms of managing the virtual infrastructure out of the box

  • Requires an upgrade to vCenter Server for advanced features

  • Many advanced features are only available with additional plug-ins.

Bottom line: ESXi 5 is the market leader, which shows in the maturity of the product, the polish of its console and the vast number of support tools available. But it comes at a cost.

Oracle VirtualBox 4.1.18

Pros:

  • Free, open source and a small 20MB download

  • Stable, with very good usability

  • Can boot from .iso images, and offers simplified file sharing

  • Runs on, and hosts, a very wide variety of OSes.

Cons:

  • Limited USB support

  • Less refined than more established competitors

  • Not all host ports are available under the VM

  • Number of guests limited by the host PC

  • Doesn't support drag and drop.

Bottom line: VirtualBox is an inexpensive path for an individual or SMB to explore virtualisation. If your needs extend past running a production server and a web server on a pair of VMs on a single machine, you'll probably want another product.