Caption by: Roger Howorth
Launched in April, VMware's vSphere 4 is the latest major release of the company's ESX server virtualisation platform. Highlights of the new suite are Thin Provisioning, Data Recovery, Fault Tolerance and Distributed Power Management, all of which we'll look at in more detail in this review.
We found vSphere easy to deploy and even easier to use. But picking the version that's right for you could be tricky. The vSphere hypervisor can be purchased in four different editions (Standard, Advanced, Enterprise and Enterprise Plus); there's also a free downloadable version, called ESXi. Each version enables you to host multiple virtual machines (VMs) on a single server, so the free ESXi might be adequate for small organisations and for some test and development environments. However, the paid-for editions come with various extra features, such as the ability to pool groups of VMs together and place limits on the server resources that each pool can use.
Of all the paid-for features, the ability to move VMs from one host to another without any downtime is probably the most popular. VMware calls this feature vMotion, and it's part of the Advanced, Enterprise and Enterprise Plus editions. Equally useful, but perhaps less well known, is Storage vMotion, which allows VMs to be moved from one storage device to another without any downtime. Storage vMotion is only available in the Enterprise and Enterprise Plus versions.
In addition to vSphere, many of the advanced features require an instance of VMware's management suite, vCenter Server, which must be purchased separately. The logic here is that most customers have many vSphere servers but only one, or a few, vCenter Servers, so most people need to buy multiple vSphere licences and only a few vCenter Server licences. Bear in mind, though, that it's the vSphere licences that cover the advanced functionality, such as vMotion and Data Recovery.
If this sounds a little confusing then you're not alone, and care will be needed to ensure you purchase all the components that will be required. For example, our tests were stalled for a few weeks because we had vSphere Enterprise Plus licences but no licence for vCenter Server. Fortunately, VMware provides an online Purchase Advisor questionnaire that should help to point you in the right direction.
These complications aside, we were pleased to see the suite has evolved greatly since the last version. The most noticeable changes are several new features designed to help enterprise IT departments keep their systems running come what may, and to make the most of their resources.
The new vStorage Thin Provisioning option allows you to configure VMs to use only the amount of storage needed by their Virtual Machine Disk files (VMDKs). You set a maximum size for the VMDK in the usual way, but with thin provisioning in place, if the maximum limit is 10GB but the VM uses 4.5GB for its OS, apps and data, then the VMDK will use 4.5GB of storage on your server. The amount of server storage can grow dynamically as more data is added to the VMDK, up to the configured maximum. This is in contrast to the previous approach, where administrators defined the maximum size of the VMDK when the VM was created, and the VMDK always occupied that amount of server storage, regardless of how much data it actually contained.
We tested this by creating a new VM configured with an 8GB VMDK and the Thin Provisioning option. Before booting the VM for the first time, the VMDK occupied only a few bytes of disk space on the server. We watched this grow to around 800MB as we installed a copy of Ubuntu Linux in our VM. Thin provisioning is an excellent way to make the most of your existing storage, and could easily help organisations defer buying additional disks for their vSphere servers. It will also save time for administrators, who can now avoid having to manually expand a VMDK that has filled up because it was created without enough headroom — a common problem before thin provisioning. Administrators can now configure a VM with plenty of VMDK headroom, safe in the knowledge that it will only use the disk space that's actually required. Unfortunately, vStorage Thin Provisioning only works in one direction, so free space is not normally reclaimed from the server storage if data is deleted from a VMDK. However, in our tests we reclaimed space by zeroing partitions on our VMDK and using vSphere's Migrate option to move the VMDK to a different storage device. This could be done without any VM downtime, provided you have purchased the Storage vMotion option.
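Conceptually, a thin-provisioned VMDK behaves much like a sparse file on an ordinary filesystem: the maximum size is fixed up front, but physical blocks are only allocated as data is actually written. The following sketch illustrates the idea using a sparse file on a POSIX filesystem; it's an analogy only, not the VMDK format itself, and assumes the filesystem supports sparse files (as ext4 and most modern filesystems do).

```python
import os

# Create a "thin" 8 GiB file: the logical size is set up front,
# like the VMDK maximum, but no blocks are allocated yet.
path = "thin_demo.img"
LOGICAL_SIZE = 8 * 1024**3  # 8 GiB cap, like the 8GB VMDK in our test

with open(path, "wb") as f:
    f.truncate(LOGICAL_SIZE)          # sets the size without writing data

st_initial = os.stat(path)
print("logical size:", st_initial.st_size)            # the full 8 GiB
print("allocated:", st_initial.st_blocks * 512)       # near zero: sparse

# Writing data grows the physical allocation, up to the logical maximum.
with open(path, "r+b") as f:
    f.write(b"\x01" * (4 * 1024**2))  # 4 MiB of "guest OS" data

st_after = os.stat(path)
print("allocated now:", st_after.st_blocks * 512)     # roughly 4 MiB once flushed

os.remove(path)
```

As with a thin VMDK, deleting data inside the file would not shrink the allocation; the filesystem (like vSphere) only grows it on demand.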
Thin provisioning is a great addition to the VMware arsenal, but top of the list of new features is probably Data Recovery, which adds comprehensive tools to back up and restore VMs while also minimising the amount of storage consumed by the backup data. In our tests we found that this feature was surprisingly quick and easy to use. We navigated to the Data Recovery tab in the vSphere Client and clicked on a VM to get the Data Recovery menu, which allowed us to run backup jobs that were already defined, and to define new jobs. Typically a backup job would include a group of related VMs that should be backed up together. We could also back up an individual VM using the Backup Now option, and could restore individual VMs in much the same way.
Restoring a backed-up virtual machine (VM) from the vSphere Client.
Having backup facilities integrated into the vSphere Client rather than needing to launch a separate program is extremely convenient. But the main benefit of the Data Recovery feature is its data de-duplication (dedupe) capability. This means that if you back up a VM twice and the only difference between the first backup and the second is a few bytes defining the IP address, the second backup will use only a few bytes of storage. Better still, dedupe works across multiple VMs, so you could conceivably back up a huge number of VMs using only a very small amount of storage.
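The principle behind block-level dedupe can be sketched in a few lines: split the data into blocks, hash each block, and store only blocks not already seen. This is a toy model to show why a second, nearly identical backup costs almost nothing; VMware's actual Data Recovery engine is more sophisticated (the chunk size and hashing scheme here are our own assumptions).

```python
import hashlib
import os

CHUNK = 4096  # fixed-size blocks; a real dedupe engine may chunk differently

class DedupeStore:
    """Toy content-addressed backup store: identical blocks are kept once."""

    def __init__(self):
        self.blocks = {}    # sha256 digest -> block contents
        self.backups = {}   # backup name -> ordered list of digests

    def backup(self, name, data):
        """Store a backup of `data`; return how many NEW bytes were kept."""
        new_bytes = 0
        digests = []
        for i in range(0, len(data), CHUNK):
            block = data[i:i + CHUNK]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:   # only unseen blocks cost space
                self.blocks[digest] = block
                new_bytes += len(block)
            digests.append(digest)
        self.backups[name] = digests
        return new_bytes

    def restore(self, name):
        return b"".join(self.blocks[d] for d in self.backups[name])

store = DedupeStore()
vm_disk = bytearray(os.urandom(1024 * 1024))    # 1MB of unique VM data

first = store.backup("vm1-monday", bytes(vm_disk))
vm_disk[0:4] = bytes([10, 0, 0, 2])             # only the IP address changed
second = store.backup("vm1-tuesday", bytes(vm_disk))

print(first, second)   # first costs the full 1MB; second costs one 4KB block
```

Because the digest table is shared across all backups, the same mechanism dedupes across multiple VMs: fifty Windows VMs share most of their system files, so their blocks are largely stored once.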
The main limitation of Data Recovery is that it works only with disk-based storage, so you'd need to use additional backup products to put your backups onto a tape library or other offline storage.
Another groundbreaking addition is the Fault Tolerance feature, which runs a cloned copy of selected VMs on a second server. The cloned copy runs in lock step with the original, so if the first server fails the clone is ready to instantly take over.
VMware Fault Tolerance works with any operating system and application, but the VM must be configured with a Thick-Eager-Zero (TEZ) virtual hard disk. Unlike thin-provisioned disks, the amount of storage used by a TEZ disk is specified when it's created, and does not change dynamically depending on how much data it contains. So if you create a 10GB TEZ disk, it will always use 10GB of storage. There's a marginal performance gain from using the TEZ format compared to thin-provisioned disks, but in most cases this is not enough to outweigh the cost benefit of using the latter, so most VMs will use thin-provisioned storage. However, administrators can specify which format they want when creating a VM, and can convert between the formats if they change their minds.
In our tests we took a VM running Windows Server 2008 and preconfigured with a TEZ disk, and configured it for fault tolerance by right-clicking the VM's icon in the main inventory hierarchy and selecting Turn On Fault Tolerance. It took just a few seconds to do this, and vSphere took 45 seconds to create the clone and get it working. The VM did not need to be switched off or rebooted.
If the VM does not have a TEZ-format virtual disk it must be powered off so its VMDK can be converted. Apart from this, the same procedure is used to enable fault tolerance — just right-click on the VM and select the fault tolerance option. When we tested this with a different VM, vSphere took just over three minutes to create and enable the clone, including the time needed to convert its 6GB VMDK to the required format. It seems there is a small bug in the current version that causes the process to fail if you try to create a fault-tolerant clone of a non-TEZ format VM without first powering it off. VMware told us it was aware of this.
We tested fault tolerance by opening a console onto both VMs. vSphere issued a pop-up telling us that the cloned VM screen was a read-only display, so we couldn't make any changes to the clone using the mouse or keyboard. However, the display on the clone was always identical to the source VM — if we moved the mouse pointer on the source, the mouse pointer also moved on the cloned VM's display. vSphere also provides an option to test the clone by actually failing over to it.
In summary, Fault Tolerance is extremely easy to use – much more so than achieving similar results using Windows Server Clustering Services. vSphere Fault Tolerance has the added benefit of being completely OS independent, so you can make any operating system and application fault tolerant. Also, it does not require you to purchase additional licences for the clone VM. Fault Tolerance and Data Recovery both come in the top three vSphere versions (Advanced, Enterprise and Enterprise Plus).
Distributed power management
Another notable addition to vSphere is Distributed Power Management (DPM), which monitors your vSphere environment and constantly tries to minimise the number of vSphere servers needed to host your VMs. Whenever possible it will use vMotion to migrate VMs onto the smallest number of vSphere hosts and then switch off unused ones. For example, you might have five vSphere servers hosting 50 VMs, each running a Windows XP desktop for a remote worker. During the day the 50 VMs all use a fair amount of server resources, so DPM spreads them over five vSphere servers. But at night or during the weekend, when nobody is working, the VMs sit idle, so DPM will move them all onto one server and switch off the other four. The VMs are kept running at all times, so users can connect whenever they want. Again, this feature works regardless of the VM operating system and applications, but in this case the vSphere servers need either a Wake-On LAN capability or a Lights-Out Management card so they can be switched on automatically when needed. DPM is only available in the two higher-end vSphere versions (Enterprise and Enterprise Plus).
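The consolidation decision DPM makes resembles a classic bin-packing problem: place VM loads onto as few hosts as possible, then power off the empty ones. The sketch below uses a simple first-fit-decreasing heuristic with made-up load figures; real DPM works from DRS resource metrics and weighs migration costs and power-on latency, so treat this purely as an illustration of the idea.

```python
def consolidate(vm_loads, host_capacity=100):
    """Pack VM loads (percent of one host's capacity) onto as few hosts
    as possible using first-fit decreasing. Hosts that end up unused
    can be powered off; powering one back on is the reverse step."""
    hosts = []  # load currently placed on each powered-on host
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load    # fits on an already-running host
                break
        else:
            hosts.append(load)      # no room anywhere: power on another host
    return hosts

# Daytime: 50 desktop VMs each using 9% of a host's capacity.
day = consolidate([9] * 50)
print("daytime hosts:", len(day))    # 5 hosts (11 VMs fit per host)

# Night: the same VMs idle at 1% each, so they all fit on one host
# and the other four servers can be switched off until morning.
night = consolidate([1] * 50)
print("night hosts:", len(night))    # 1 host
```

The VMs keep running throughout; only the mapping of VMs to powered-on hosts changes, which is why vMotion (and a way to wake servers remotely) is a prerequisite.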
Usability, licensing & performance
In our lab tests, we found VMware's new virtualisation suite easier to install and use than the previous version. Such improvements might seem cosmetic, but if they help to increase productivity then they also help you get the most from your existing resources.
As far as licensing is concerned, vSphere 4 is licensed per processor, with each 'processor' being defined as a populated CPU socket with up to six physical cores in the case of the Standard and Enterprise versions, or 12 cores for the Advanced and Enterprise Plus editions. According to VMware's online store, our test installation of vSphere 4 Enterprise Plus would cost £3,345 (ex. VAT) with one year's Platinum (24x7) support. To that, you'll have to add another £4,795 (ex. VAT) for vCenter Server 4 Standard (including Orchestrator and Linked Mode) with the same support deal. Grand total: a hefty £8,140 (ex. VAT). You could reduce the cost somewhat by opting for a cheaper Gold support plan.
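The per-processor scheme above reduces to a small calculation: one licence per populated socket, provided each socket stays under the edition's core cap, plus a vCenter Server licence on top. This sketch uses the review's quoted prices (ex. VAT, with one year's Platinum support) and assumes our test installation was a single-socket host.

```python
# Core cap per populated socket, by edition, as described above.
CORE_CAP = {"Standard": 6, "Enterprise": 6,
            "Advanced": 12, "Enterprise Plus": 12}

def licences_needed(edition, sockets, cores_per_socket):
    """One licence per populated socket, if the edition covers the core count."""
    if cores_per_socket > CORE_CAP[edition]:
        raise ValueError(f"{edition} does not cover {cores_per_socket}-core sockets")
    return sockets

VSPHERE_ENT_PLUS = 3345   # GBP per processor, 1 yr Platinum support
VCENTER_STANDARD = 4795   # GBP, one instance manages many hosts

# Assumed single-socket test host with six-core CPU:
total = (licences_needed("Enterprise Plus", 1, 6) * VSPHERE_ENT_PLUS
         + VCENTER_STANDARD)
print(total)  # 8140, matching the grand total above
```

A dual-socket host would double only the vSphere line; the single vCenter licence is shared across all hosts it manages.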
Although we weren't able to verify this in our tests, VMware says network and storage performance have been improved by 300 percent compared to the previous version of ESX Server. You can now expect a VM to be able to shift 30Gbps over a network, and 300,000 I/O operations per second (IOPS) on a SAN — provided your LAN and SAN kit are up to handling the load.
vSphere also includes a range of other performance improvements and boosted features. For example, the maximum number of virtual CPUs that can be assigned to a VM has doubled to 8 (only available in Enterprise Plus), while the maximum amount of RAM that can be allocated to a VM has gone up fourfold to 255GB (all vSphere versions). All versions of vSphere servers can now be fitted with up to 64 logical CPUs and 1TB of RAM, and each vSphere server can handle 256 powered-on VMs. VMware has also tweaked the memory overcommit feature. For example, the sum of the RAM of all VMs running on a server with 8GB RAM can now be 16GB. In fact, there's a huge list of impressive limits and features of this nature, and it may be worth comparing VMware's list to the competition if you're planning a large-scale implementation.
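The overcommit figure above works out as a simple ratio. A quick worked example, using hypothetical per-VM allocations that add up to the 16GB-on-8GB case described in the text (vSphere covers the gap by reclaiming idle memory with techniques such as transparent page sharing and ballooning):

```python
HOST_RAM_GB = 8
vm_ram_gb = [2, 2, 4, 4, 4]        # hypothetical allocations for five VMs

configured = sum(vm_ram_gb)        # 16GB configured on an 8GB host
ratio = configured / HOST_RAM_GB   # a 2:1 overcommit ratio
print(configured, ratio)
```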