
Six factors to weigh up for virtualisation projects

Diving into virtualisation without preparing properly is risky, so here is a checklist to help projects go smoothly
Written by Cath Everett, Contributor

Despite the recession and the general pressure on IT budgets, the virtualisation sector has not been too badly hit. Organisations are still buying into the idea of cutting costs by reducing utility bills, improving the use of resources and freeing expensive datacentre space.

Gartner forecast earlier this year that global sales of virtualisation software would increase by 43 percent to $2.7bn (£1.7bn), with penetration levels rising from 12 percent in 2008 to 20 percent by the end of the year.

The research firm's definition of virtualisation includes infrastructure software, management tools and hosted virtual desktops, which should triple in value to $298.6m by the end of the year. Infrastructure revenues are expected to grow 22.5 percent to $1.1bn, while management tool sales will jump 42 percent to $1.3bn.

Ad hoc deployments
But as part of that shift to virtualisation, too many companies are still not planning implementations in a structured, strategic way. Instead, deployments are often ad hoc, which ends up creating problems.

Kevin Green, infrastructure solutions manager at IT services provider Trustmarque Solutions, says virtualisation is a double-edged sword. "If you get it wrong, the headaches can outweigh the benefits. You have to look at the whole infrastructure, not just one or two elements, because in a virtualised environment, everything has a knock-on effect on everything else," he says.

That interdependency means it is crucial to plan implementations carefully, not least because the upfront costs of going down this route can be high.

Here are six essential considerations that need to be weighed up before embarking on a virtualisation project.

Factor 1: The business context


In difficult economic times, financial directors are unlikely to sign off costly deployments unless they are presented with a clear case for return on investment.

A common way of justifying virtualisation projects relates to reducing power and cooling costs, although it is not the only option.

To understand how much energy the IT department is consuming, it is generally necessary to establish exactly what equipment is in situ and approximately how much electricity each component draws from the grid.

The next step is to consult with facilities management colleagues about the cost per unit of electricity, before estimating how much power could be saved by consolidating and virtualising the server estate.

The before-and-after price differential should add up to a business case.
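As a rough illustration of that arithmetic, the sketch below estimates the annual saving from consolidating a server estate. Every figure in it, including the server counts, wattages, electricity tariff and the overhead factor for cooling, is a hypothetical placeholder to be replaced with audited numbers from the equipment inventory and the facilities team.

```python
# Rough, illustrative estimate of the annual electricity saving from server
# consolidation. All figures (server counts, wattages, tariff, PUE) are
# hypothetical placeholders; substitute audited values from your own estate.

COST_PER_KWH_GBP = 0.14    # assumed unit cost of electricity, from facilities
HOURS_PER_YEAR = 24 * 365
PUE = 1.8                  # assumed ratio of total facility power to IT power,
                           # to account for cooling and power-distribution overhead

def annual_cost(num_servers, avg_watts_per_server):
    """Annual electricity cost in GBP for a set of servers, including overheads."""
    it_kwh = num_servers * avg_watts_per_server / 1000 * HOURS_PER_YEAR
    return it_kwh * PUE * COST_PER_KWH_GBP

before = annual_cost(num_servers=100, avg_watts_per_server=350)  # existing estate
after = annual_cost(num_servers=12, avg_watts_per_server=550)    # consolidated hosts

print(f"Before: £{before:,.0f}/yr  After: £{after:,.0f}/yr  Saving: £{before - after:,.0f}/yr")
```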

This is not the only consideration in the business context: another relates to how pools of virtualised resources, rather than individual boxes, will be paid for by business units. In the past, departmental heads often funded the purchase of their own servers to run applications.

Loss of control
The move to a virtualised world could cause politics to rear its ugly head. Some managers may not like the loss of control of their own physical assets, while others may be unhappy at having to share resources with other units.

As a result, it may be necessary to sit them down to thrash out the issues. Such matters may include ensuring the finance department has enough capacity at the end of each quarter to process the company's accounts, or establishing whether to introduce charge-back mechanisms or finance IT services via a blanket fund.

Roy Illsley, a senior analyst at the Butler Group, says few people are using chargeback because their organisations lack the maturity to cope with such a system. "So they're coming up with fudges. It's like going to a restaurant. If you know people well, you'll probably split the bill six ways, but if you don't, you'll all probably want to pay individually," he says.

Factor 2: Infrastructure considerations


The first thing to think about in terms of infrastructure is whether the organisation's servers are capable of accommodating virtualisation or whether they will need to be replaced by newer machines.

The x86 processors of servers that are more than three or four years old are unlikely to support either Intel's or AMD's virtualisation extensions, which enable hypervisors to run unmodified guest operating systems without incurring significant performance overheads.

Not all of Intel's or AMD's processors support such technology either, a factor that should be borne in mind during any new procurement process. Older machines may also experience I/O bottlenecks, which can again negatively affect performance.
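On a Linux host, one quick way to see whether a candidate server's processors advertise the Intel VT-x or AMD-V extensions is to look for the "vmx" or "svm" flags in /proc/cpuinfo. The minimal sketch below does just that; note it is only an indicative check, since the extensions can still be switched off in the BIOS.

```python
# Minimal check for hardware virtualisation support on a Linux host.
# 'vmx' indicates Intel VT-x, 'svm' indicates AMD-V. This only shows that the
# processor advertises the extension; it can still be disabled in the BIOS or
# firmware, which this sketch does not detect.

def hw_virtualisation_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

if __name__ == "__main__":
    support = hw_virtualisation_support()
    print(support or "No hardware virtualisation extensions reported")
```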

Moreover, if there is a desire to introduce functionality such as VMware's VMotion, which enables the migration of workloads between different hosts for high-availability purposes, the servers' processors must come from the same manufacturer or the technology will not work.

Impact on bandwidth
Another thing to think about is the network. Most virtualisation projects involve centralising IT infrastructure back into the datacentre, but such a move can have an impact on bandwidth, particularly at the WAN level if remote offices are involved.

Adrian Polley, chief executive of IT services provider Plan-Net, says virtualisation can represent change from a user point of view. "If they're used to accessing applications using a gigabit-Ethernet network and end up with a 100Mbps broadband connection, they'll notice the difference and won't be happy, so you have to plan how you're going to deal with that," he says.

One possibility is to introduce WAN acceleration technology to speed data transfer, while another might be to introduce thin-client machines, which are designed to run over low-bandwidth networks.

"When you design systems, you have to understand the possible impact, because you can end up having to change the infrastructure completely," says Polley.

Factor 3: Storage implications


The biggest single concern when moving to a virtualisation infrastructure involves storage.

While organisations may be able to get away with continuing to use direct attached storage for small development and test deployments, the larger a virtualised production environment grows, the less sustainable the approach becomes.

This is particularly true for companies wanting to deploy tools such as VMotion.

Instead, a move to shared storage may be required, with storage area network (SAN) technology being the most popular in this context. Such technology is expensive but important, because virtual machines (VMs) are stored as disk images on the SAN.

In addition, every physical server in the pool must be able to see each VM disk image, so that spare processing capacity can take over a workload should a problem occur with a given host, or should it need to be taken down for maintenance purposes.

Paul Mew, technical director of IT services provider Ramsac, says if an organisation consolidates servers down from, say, 10 physical boxes to two, each running five VMs, and then experiences a hardware failure, it may find itself "in an increased risk position".

"The issue is that, because you now have your eggs in two baskets rather than 10, you'll have to try and recover half of your infrastructure all at once, rather than just a single server," he says.

Another problem arises if an IT department has begun virtualising its environment but has not factored in the cost of moving to or upgrading shared storage: it can be tricky to go back to the finance director and ask for more money because of that lack of foresight.

But when purchasing a SAN, it is also important to check virtualisation software providers' hardware compatibility lists. Not all components of storage vendors' equipment are accredited to work in a virtualised environment, which may cause problems later.

Performance load
It is just as important to think about sizing the storage infrastructure in relation to performance load. After virtualising their servers, some companies find applications run more slowly. This loss of speed occurs because the physical disks in a SAN can only process a given number of I/O requests per second, and VMs tend to generate so many requests that the disks cannot always keep up.

Workload analysis and planning tools, such as Novell's PlateSpin, can be used to get around this problem by creating a usage profile of how existing physical servers use memory, disks, processors and network bandwidth, and by evaluating what capacity is likely to be required in a virtualised environment.

Such planning is also crucial in terms of storage volumes, the areas in the SAN in which data is stored. If multiple VMs running heavy workloads all attempt to access the same volume, performance will inevitably suffer. Each volume should therefore serve a mix of VMs with heavy and light workloads to keep the load balanced.
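As a simple illustration of the sizing exercise such tools automate, the sketch below totals the I/O demand that a proposed mix of VMs would place on each storage volume and flags any volume that exceeds what its disks can deliver. The per-VM demand figures, volume capacities and placement are invented for the example; real numbers would come from workload profiling.

```python
# Illustrative sizing check: does the mix of VMs assigned to each SAN volume
# stay within the IOPS its disks can deliver? The per-VM demand and per-volume
# capacity figures are made up; real numbers would come from workload profiling.

volume_capacity_iops = {"vol1": 1800, "vol2": 1800}   # e.g. 12 x 150-IOPS disks each

vm_demand_iops = {
    "mail-server": 900, "db-server": 1200, "file-server": 400,
    "web-server": 150, "print-server": 50, "test-box": 80,
}

# A candidate placement that mixes heavy and light workloads across volumes.
placement = {
    "vol1": ["db-server", "print-server", "test-box"],
    "vol2": ["mail-server", "file-server", "web-server"],
}

for vol, vms in placement.items():
    demand = sum(vm_demand_iops[vm] for vm in vms)
    capacity = volume_capacity_iops[vol]
    status = "OK" if demand <= capacity else "OVERLOADED"
    print(f"{vol}: {demand}/{capacity} IOPS -> {status}")
```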

Factor 4: Back-up and recovery


Because of the high-availability capabilities of tools such as VMware's VMotion, there is a danger that backup needs may be underestimated.

In a physical environment, software agents are generally installed on server operating systems to back up applications and data to disk or tape. But in a virtual world, VMs are complete logical environments that include an operating system, applications and data.

Initially, most organisations simply continue to install backup agents on each of their VMs, but this approach backs up only the applications and data, not the operating system. That means if the VM goes down, it may be necessary to rebuild the operating system from scratch before the entire system can be restored.

A further concern relates to backing up multiple VMs to the same storage volume — or area in the SAN in which data is stored — at the same time. If this is allowed to take place, the system can end up overwriting one VM with another or putting it in the wrong place, which can lead to administration headaches.

Yet another challenge involves resource contention. Backup activities require high levels of server processing power, so if spare resources are not made available while they run, performance may be negatively affected.

Ramsac's Paul Mew says many people back up their virtualised environment in the same way as their physical one. "But there's definite scope to simplify and reduce management overheads," he adds.

Specialised backup tools
As a result, once deployments have had time to settle in, some organisations begin looking at specialised virtualisation backup tools to address these challenges. Such software makes it possible to clone and restore the entire VM instance or alternatively upload data snapshots to newly created clones, which saves time.

What these tools do not offer is granular, application-specific recovery, which means staff may need to recover an entire VM instance even if only one file is lost or corrupted. Moreover, because each VM is backed up in its entirety, even with snapshotting enabled, it is likely to require more disk capacity at the SAN level.

However, de-duplication technology can be useful in this context and is included as standard in some SANs. De-duplication tools store only one copy of identical blocks of data, so information that is repeated across backups, or that has not changed since the last backup, is not written out again.
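As a rough sketch of the principle, the toy example below de-duplicates at the block level: it hashes fixed-size chunks and stores each unique chunk only once, so VM images that share an operating system add little new data on subsequent backups. Production SAN de-duplication is considerably more sophisticated, but the space saving comes from the same idea.

```python
# Toy block-level de-duplication: split data into fixed-size chunks, hash each
# chunk, and store only chunks not already seen. VM disk images that share an
# operating system contain many identical blocks, so later backups add
# comparatively little new data. Real SAN de-duplication is far more advanced.

import hashlib

CHUNK_SIZE = 4096
store = {}  # chunk hash -> unique chunk bytes

def backup(data):
    """Return the list of chunk hashes describing this backup, storing only new chunks."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return recipe

vm1 = b"A" * 40_000 + b"unique-to-vm1" * 100
vm2 = b"A" * 40_000 + b"unique-to-vm2" * 100   # shares most blocks with vm1

backup(vm1)
backup(vm2)
logical = len(vm1) + len(vm2)
stored = sum(len(c) for c in store.values())
print(f"Logical data: {logical} bytes; actually stored: {stored} bytes")
```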

Factor 5: Licensing and application support


Although most applications run on virtualised servers, support levels can vary, so it is best to check with the vendor concerned. If it has certified its applications as virtualisation-ready, full support will be forthcoming.

However, some suppliers only guarantee best-effort services. So, if they believe a problem is likely to have occurred because of the virtualised environment, they may ask for it to be reproduced on a physical server, which is time-consuming and puts the onus on users to define the error.

Other vendors provide no support, either due to lack of testing or because of known issues when their applications run in a virtual world. In this scenario, it is necessary to weigh up the risks of migration and evaluate whether there are sufficient skills in-house to cope with future challenges.

Suitable candidates
Another thing to think about is which applications make suitable candidates for migration. The knock-on effects on infrastructure of high-throughput, network- and storage-intensive applications, such as heavily loaded databases, may make them unsuitable unless organisations have money to throw at upgrades.

Licensing is yet another pitfall. While some suppliers offer the equivalent of site licensing, others charge for the number of physical processors found in the primary host — whether the virtual machine will use them or not. Still others license their applications based on the number of servers acting as a pooled resource.

As Plan-Net's Adrian Polley says: "You really need to check it out, because you can effectively end up with a liability that you never knew you had."
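The differences can be substantial. As a purely hypothetical comparison, the sketch below prices the same application under the three charging models described above, using invented fees and server counts simply to show how far the bills can diverge.

```python
# Hypothetical comparison of the licensing models described above for one
# application running in a virtualised cluster. All fees and counts are
# invented; the point is how much the bill varies with the charging model.

hosts_in_pool = 4        # physical servers the VM could migrate between
sockets_per_host = 2     # physical processors in the host running the VM

site_licence = 20_000                           # flat fee covering the whole estate
per_socket_in_host = 4_000 * sockets_per_host   # every processor in the primary host
per_pooled_host = 6_000 * hosts_in_pool         # every server acting as a pooled resource

for model, cost in [("Site licence equivalent", site_licence),
                    ("Per processor in primary host", per_socket_in_host),
                    ("Per server in the pool", per_pooled_host)]:
    print(f"{model}: £{cost:,}")
```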

Factor 6: Internal management repercussions


One of the key repercussions on the IT department of a migration to x86 server virtualisation is that sharing resources affects other elements of the infrastructure.

Roy Illsley of the Butler Group says virtualisation cuts across servers, storage and networking. "So you can end up in a situation where internal support is duplicated, or there are gaps because people can't agree on who is responsible for what."

That situation presents a particular challenge for large enterprises, where staff tend to specialise in certain areas, because with virtualisation all those areas start to overlap. Therefore, it is crucial both to demarcate responsibilities clearly and to reassure people that their jobs will not be axed, although they might change, as part of the move to a new world.

Plan-Net's Adrian Polley says people will end up needing a greater breadth of knowledge because their area of specialisation is likely to become just a smaller element of a virtualised whole. "People really need to be expert in a bunch of areas, so the skills issue is something that's hard to overstate," he says.

Knowledge transfer
While training or knowledge transfer from service providers and consultants will be required for those personnel involved in the deployment, it is also important not to forget those affected by "second-line impacts", Polley adds.

But one way of taking some of the pressure off harassed personnel is to introduce specialised management tools from suppliers such as Vizioncore and Veeam. Unfortunately, too many organisations fail to budget for such products or simply find themselves bamboozled by the wealth of point software available.

The absence of a broad-based virtualisation management suite also creates management issues, but point tools can help with troubleshooting, particularly as the functionality provided by hypervisor vendors remains limited.

As Polley concludes: "All this sharing means that if you give resources to one thing, you have to take it away from something else. So, it's always a balancing act or you'll end up with bottlenecks all over the place."
