Here's my dilemma: I have twelve Dell 2950 servers of various disk and memory configurations that I want to use for virtual hosts in my lab. I can't decide which hypervisor to use, partly because I'm not the only one who's going to use the lab. If I were, the answer might be different, but I have others to worry about when I make the choice.
My requirements for a lab environment are pretty simple, really. I need something that's compatible with my Dell 2950s, something that's easy for the other users to work with (creating and managing VMs), and something that's free or that I can license at little or no cost: through a volume discount, in exchange for product-focused articles and reviews I can write about it, because it's open source, or because it offers an extended trial license.
Hypervisors: decisions, decisions
- Citrix XenServer
- Microsoft Hyper-V
- Proxmox VE
- VMware ESXi
I've used XenServer and I liked it. Its interface is simple and straightforward. The Xen hypervisor has excellent performance and deploying a new VM from a template is a lightning fast operation. Windows guests and Linux guests perform equally well (in my experience) on Xen and I have no complaints with it at all. The only thing that I've never been completely successful with when using XenServer is performing a physical to virtual (P2V) migration. It did not go well at all. It cost me a lot of time and some credibility since it was a production system that I was performing the P2V on. Bad move.
To be completely transparent, the product I used at the time for the P2V was a third-party migration tool that was an epic fail. Even tech support couldn't help. That was 2007 and yes, I do hold a grudge. I've never used that tool again and won't. Sorry.
It had nothing to do with XenServer or Citrix. Their tech support wasn't involved at all, so no fault to Citrix on that one. Back then, Citrix didn't have its own tool for this. Now it does. The problem with XenServer for this lab is that no one is familiar with it and I'm afraid to go off on the tangent of using something foreign to everyone. None of my other users have ever touched XenServer and it would be a constant help desk situation for me to use it.
Microsoft's Hyper-V is a good virtualization platform. Windows Server 2012 feels lighter and is less intrusive as a hypervisor substrate operating system than in the early days of Hyper-V. I really like 2012's snappy response and overall I'd say that it's a good choice, but unfortunately not in this case. This lab will mostly house Linux VMs and while Hyper-V is an excellent platform for Windows guests, I'm not 100 percent sold on its Linux support. I know that statement will have a lot of rocks thrown at it, but performance here is very important to the users and I can't take any chances.
Plus, most of them are anti-Windows people who'd rant at me endlessly about using a Microsoft product. So, yes, indeed I'm bending to popular consensus. Sorry, Hyper-V, I'll use you for some of my personal projects.
I love Proxmox. I love it because it's open source and uses great open source tools for working with virtual machines. It uses KVM and QEMU for what I call "traditional" virtualization, and for Windows VMs, KVM is the only way you can go. But Proxmox also gives you the capability to deploy OpenVZ containers for your Linux VMs. OpenVZ containers are an extremely efficient way to create VMs--efficient because you can deploy hundreds (seriously) of VMs onto a single host. OpenVZ containers treat VMs like BSD jails and run them as separate applications. What you get is a system that delivers workloads at a very high density with very little corresponding overhead.
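To give a flavor of how lightweight OpenVZ container deployment is, here's a minimal sketch using the `vzctl` tool on an OpenVZ-enabled host such as Proxmox. The container ID (101), hostname, IP address, and template filename are all placeholder assumptions -- substitute a template that's actually present in your host's template cache.

```shell
# Create a container from a downloaded OS template
# (the CTID and template name below are illustrative placeholders).
vzctl create 101 --ostemplate debian-7.0-standard_7.0-2_i386

# Assign a hostname, an IP, and a modest memory limit, then persist the config.
vzctl set 101 --hostname lab-ct01 --ipadd 192.168.1.101 --ram 512M --save

# Start the container -- this takes seconds, not minutes,
# because there's no guest kernel to boot.
vzctl start 101

# Run a command inside the container from the host.
vzctl exec 101 uptime
```

Because each container shares the host kernel, the per-VM overhead is a handful of processes rather than a full emulated machine, which is what makes the hundreds-per-host density plausible.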
I've called Proxmox the "ultimate hypervisor" and I mean it. If you have no political or religious biases in favor of Citrix, Microsoft, or VMware, you should give it a try.
I like Proxmox so much that I had actually started writing a book on it. I had to abandon it for various reasons, but I'm still very much a Proxmox fan.
However, and it pains me to say it, but I can't use Proxmox for this lab. I personally think it would work, but some features that we require just aren't there. In the next section, I explain in more detail.
If you guessed that I'm going to use ESXi, you guessed correctly. If you also guessed that I'm less than passionate about the choice, you're also correct. Don't get me wrong, I still like VMware. I use it a lot. And I've used it since its first beta product came out in 1999. I'm not passionate about it because I really liked ESX. I'm not such a fan of ESXi. I need that Domain0, that operating system to work with, and that feeling that there's something more than a running kernel propping up my critical workloads. Maybe it's just me, but it feels a little thin.
I had the same issue with Tivoli Enterprise Monitoring. The heavy client was the best thing ever. Endpoints, not so much. I know it's the opposite way that I'm supposed to think, but in my opinion the heavier operating system/hypervisor was just better.
I'm also not such a fan of running vCenter Server as a virtual machine. I prefer having it separate for various reasons that I don't want to go into here, but realize that there are significant advantages to having a management server that's physically and logically independent of the ESX environment.
I'm compelled to use ESXi because that's what everyone who's going to use the lab environment is familiar with. The other thing is that, since I'm using ten-year-old systems, I have to use ESXi 4.1, which doesn't please me. It also won't please the other people because they want the latest and greatest. However, it's a lab and we have to make do with what we have. I also need to use VMware for features such as DRS and HA. Those are absolute requirements. I know it sounds odd that I'd need those features for a lab, but we can't go rebuilding VMs or hosts every few days. This is a working lab that requires more stability than normal.
They're so important to me and the other users that you might say I have my own personal service level agreement to keep these systems up and running at a greater than 95 percent level. High for a lab, but that's where we are.
In the first part of the lab setup, I have five systems that I need to get up and running and into a cluster of their own. So far, I've set up two of the five. The trouble with older systems is that you never know what you're getting, so I'm struggling with the others right now.
And before you make suggestions to resolve my dilemma, I can't spend any real money to make anything work. I have to be creative. It's a good thing that's kind of my specialty--doing something with nothing. Thank goodness for those solid old Dell 2950 servers and large decommissioning projects. It's not the perfect lab environment for me, but then I'm only one user. What is it that Spock and Kirk said about this? "The needs of the many outweigh the needs of the few." That's 23rd Century logic for you.
So, what do you think of my dilemma? What would you do? Do you think it's reasonable that I'm less than thrilled with my somewhat forced choice? Talk back and let me know.