Virtual machine density can be a virtualization deal breaker

When you decide to transition to a virtual infrastructure, you should focus on one particular aspect of that move: virtual machine density. Believe it or not, it could be a deal breaker.
Written by Ken Hess, Contributor

Virtual machine density refers to how many virtual machines your virtual infrastructure host servers can support while still performing well themselves and providing enough compute resources for every virtual machine to perform well. And you might not believe me, but there is no single right answer for this elusive and magical number. Virtual machine density depends on many factors: virtualization software vendor, storage type and speed, network speed, server hardware, workload type, and workload diversity. Vendors can boast huge consolidation ratios and high virtual machine densities, but the proof, as they say, is in the pudding, and the pudding here is the set of variables I've just listed.

The purpose of this advisory article is to educate you on virtual machine density. No vendor can say with any accuracy that their solution will net you a certain conversion rate for physical-to-virtual systems without knowing some vital capacity information. Although I often hear 8:1, 10:1, 20:1, or higher, there's no method to that madness.

Now keep in mind that I'm talking about server virtualization in this post. There are vendors who can tell you within a few virtual machines for desktop conversions. There's not as much diversity in desktops as there is in servers. Most desktop users use a web browser, a word processor, a spreadsheet, an email program, and very little else during the course of a normal workday. There are exceptions, of course, but for the average user, vendors can make pretty good guesses.

Servers are a different story.


Storage

External storage arrays, storage area networks, network-attached storage, local storage, and all the buzz terms referring to storage can make one's head spin faster than the platters on a 15K SAS drive. My best advice for virtual infrastructure storage is to buy the best you can afford, but split your storage into "tiers". Tiered storage means that you use your fastest disks for disk I/O intensive operations. We'll call this Tier 1.

Workloads that do not require high disk I/O go on Tier 2 storage: less expensive disk arrays built from standard spinning disks rather than leading-edge SSDs.

For test, development, monitoring, and utility type workloads, you can use Tier 3 storage, which is very inexpensive spinning disks. And finally, Tier 4 storage is for those virtual tape libraries, ISO repositories, YUM/RPM repositories, and similar long-term storage that doesn't require high I/O speeds, or RAID 6 or RAID 10 fault tolerance.

For very intense disk I/O workloads, such as databases, you probably want to consider local storage for best performance.

Advisory: Tier your storage according to your workload needs. One size does not fit all in storage. Storage will be your largest expenditure. Buy at least twice as much as you think you will ever need, and then be prepared to expand that in less than two years.
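The tiering decision described above can be sketched as a simple lookup. This is a hypothetical illustration, not a vendor recommendation; the workload categories and the IOPS threshold are assumptions you'd replace with your own measurements.

```python
# Illustrative sketch: map a workload to a storage tier by its I/O
# profile. Tier numbers follow the article (1 = fastest, 4 = archival);
# the 5,000 IOPS cutoff is an assumed, tunable threshold.

def storage_tier(workload_type: str, peak_iops: int = 0) -> int:
    """Pick a storage tier for a workload."""
    archival = {"virtual tape library", "iso repository", "package repository"}
    if workload_type in archival:
        return 4                       # long-term, low-I/O storage
    if workload_type in {"test", "development", "monitoring", "utility"}:
        return 3                       # inexpensive spinning disks
    # Production workloads split by measured disk I/O.
    return 1 if peak_iops >= 5000 else 2

print(storage_tier("database", peak_iops=12000))   # heavy I/O lands on Tier 1
```

In practice you would feed this from your performance-monitoring data rather than hard-coded categories, but the shape of the decision is the same.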


Network

Network speed and capacity concern application developers and administrators when converting physical servers to virtual ones. The issue is that all of your virtual machines will now share network bandwidth with their host systems. This can be a problem for those network bandwidth-hungry applications and workloads if you don't plan ahead.

Host systems allow you to "team" physical NICs into a larger, single NIC. Blade enclosures also allow NIC teaming to pass more network traffic. NIC teaming also allows you to efficiently run multiple VLANs on your hosts. Virtual machine pools often require multiple VLANs, so the host must be able to support them. It's common practice for administrators to segregate network traffic by type for virtual machines using VLANs.

Advisory: Measure your network bandwidth requirements prior to converting physical systems to virtual ones. You don't want to "choke" applications that have high bandwidth needs. A bit of capacity measurement up front might mean that some applications have to remain on physical systems.
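The capacity measurement above boils down to one check: do the measured peak bandwidths of the candidate VMs fit the host's uplink, with some reserve left over? Here's a minimal sketch; the numbers and the 25% headroom figure are illustrative assumptions.

```python
# Hypothetical pre-conversion check: sum each VM's measured peak
# bandwidth and compare against the host uplink, keeping headroom
# in reserve for the host itself and for bursts.

def fits_on_host(vm_peaks_mbps, host_uplink_mbps, headroom=0.25):
    """True if the combined VM peaks fit the uplink with reserve headroom."""
    usable = host_uplink_mbps * (1 - headroom)
    return sum(vm_peaks_mbps) <= usable

# Four VMs at their busiest hour, on a teamed 2 x 10 GbE uplink:
peaks = [1200, 3400, 800, 2500]                      # Mbps
print(fits_on_host(peaks, host_uplink_mbps=20000))   # True: 7,900 <= 15,000
```

A VM set that fails this check is a candidate to stay physical, or to be split across hosts.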

Server hardware

Your chosen server hardware can have a profound effect on virtual machine density. Again, a good capacity and performance chart will give you a good idea of the CPU and memory capacity that you'll need to accommodate those converted physical machines.

Remember that your dual-core, 16GB RAM physical machine probably doesn't need that much capacity as a virtual system. The performance numbers will tell you what your actual utilization is, and from that you can judge how many resources to dedicate to its virtual replacement.
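That right-sizing step can be sketched as arithmetic on your measured peaks. This is an illustrative formula, not a standard: the 30% growth headroom is an assumption you'd tune to your environment.

```python
# Hypothetical right-sizing sketch: size the virtual replacement from
# measured peak utilization plus growth headroom, instead of copying
# the physical machine's specs one-for-one.

import math

def rightsize(phys_cores, phys_ram_gb, peak_cpu_pct, peak_ram_pct,
              headroom=0.3):
    """Return (vCPUs, RAM in GB) for the virtual replacement."""
    vcpus = max(1, math.ceil(phys_cores * peak_cpu_pct / 100 * (1 + headroom)))
    ram_gb = max(1, math.ceil(phys_ram_gb * peak_ram_pct / 100 * (1 + headroom)))
    return vcpus, ram_gb

# The dual-core, 16GB server above, peaking at 40% CPU and 25% RAM:
print(rightsize(2, 16, peak_cpu_pct=40, peak_ram_pct=25))   # (2, 6)
```

Six gigabytes instead of sixteen is the kind of reclaimed capacity that directly raises your density numbers.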

The rule of thumb is to purchase more capacity than you need, or at least allow for easy expansion of your current capacity. Server hardware is very powerful these days, but virtual machine sprawl and overbuilding are your worst enemies. The worst possible scenario is to take your underutilized physical server farm and turn it into an underutilized virtual server farm.

Advisory: Buy as much capacity as you can afford. Allow for intelligent growth and expansion. Watch sprawl. Keep a hot spare around for failures.

Workload type

The type of workload you have to convert affects density because some workloads burn capacity quicker than others. And they can burn different types of capacity. For example, some workloads use a lot of memory while only nibbling at CPU cycles. Many vendors have affinity rules that you can tweak to separate certain virtual machines from one another, alleviating the bottlenecks that come from combining too many of the same type of workload onto a single host.

You'd be surprised how little thought goes into placing virtual machines, separating them, and keeping them balanced. Administrators think that allowing the vendor's balancing algorithms to keep workloads segregated will work. It won't. You have to study the type of workload from each of your transitioning systems, and determine how its use of available capacity will affect other virtual machines in the cluster.

Advisory: Relieve compute bottlenecks by watching what type of workloads you're deploying. Apply affinity rules. Cap memory and CPU usages where appropriate for virtual machines.

Workload diversity

Workload diversity will help increase your virtual machine density. You want to consider the workload type every time you deploy a new virtual machine to a cluster host. For example, you would not want to deploy several disk I/O intensive virtual machines to a single host in addition to other workloads. You'd want to spread those database virtual machines to other hosts, and maintain a high level of diversity on each host.

For example, hosting a database virtual machine alongside web server, application server, and network service virtual machines is workload diversity.

You need to apply the same diversification to your storage arrays. Those disk I/O intensive database virtual machines should not share an array with each other. You have to diversify all of your capacities: CPU, memory, network, and disk.
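The spread-the-heavy-workloads rule can be sketched as a simple round-robin placement. This is a toy illustration with made-up VM and host names, not a substitute for your hypervisor's balancing and affinity features; it just makes the constraint concrete: no two I/O-heavy VMs land on the same host.

```python
# Illustrative sketch: assign each disk-I/O-heavy VM (e.g. a database)
# to a distinct host, round-robin, so heavy workloads never stack up
# on one host. Names are hypothetical.

def place_heavy_vms(heavy_vms, hosts):
    """Map each heavy VM to a host, cycling through the host list."""
    return {vm: hosts[i % len(hosts)] for i, vm in enumerate(heavy_vms)}

dbs = ["db01", "db02", "db03"]
hosts = ["host-a", "host-b", "host-c", "host-d"]
print(place_heavy_vms(dbs, hosts))
# {'db01': 'host-a', 'db02': 'host-b', 'db03': 'host-c'}
```

The same idea applies to storage: substitute arrays for hosts and the placement keeps your database VMs off each other's spindles.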

Advisory: Spread your workload types among your hosts and on your storage arrays. Remember that those disk I/O intensive workloads might be better served on local SSDs than on network-attached storage.

Server consolidation and physical-to-virtual conversion require more than intuition or marketing conjecture; they require hard numbers from capacity and performance data. They also require some thought as environments grow. You can't just randomly place virtual machines anywhere you want in a cluster and expect that the software balancing will take care of your needs. I've given you some areas to focus on for your own server conversion and consolidation efforts. Please keep me posted on your progress and what kinds of virtual machine density numbers you successfully manage to score.
