
What Microsoft understands about virtualisation (and a problem for VMware)

When VMware’s main competition was a little company called Connectix (100 people to VMware’s 200), the two virtualisation teams were best enemies; staring each other down, going head to head and planning to crush the opposition and own the market.
Written by Simon Bisson, Contributor and Mary Branscombe, Contributor


When Microsoft bought Connectix and worked through the tortuous internal steps to get to a sensible virtualisation plan, switching from head-on competition to telling VMware what it was doing – so everyone could plan ahead on the management tools front – was a bit of a shock. When we caught up with him in Redmond recently, Ben Armstrong (the Virtual PC Guy) told us he basically had to pinch himself at the first meeting.

Why did it make sense for Microsoft to do this, above and beyond being so big that it has to bend over backwards to be (and be seen to be) fair? It’s because to Microsoft virtualisation isn’t a product; it’s a feature. Just as Windows phones are Windows phones because they’re part of the overall Windows platform, virtualisation is a Windows feature more than it’s a platform in its own right. (This approach has some unexpected benefits, as Ben recently found: if you plug a UPS into a Windows Server but don’t configure the USB/serial-port failover software, your VMs are still protected. The server notices it’s running on a machine with a battery and uses the default power management settings; when the UPS battery starts running low, the server will shut itself down gracefully – and by default the Hyper-V virtual machines will be saved, ready to restart automatically when you power the server back up.)

I've been saying for a while that most virtualisation projects are currently strategic; I should probably say tactical, because I mean that they’re specific projects virtualising a specific thing for a specific benefit. Virtualisation is a deliberate tactic or strategy for that project rather than the default approach. When the management tools are sufficiently mature, and migration no longer means choosing between buying identical hardware to migrate to or reducing your VMs to the lowest common denominator so they can move across a wider range of hardware, then we’ll all install everything virtualised; until then, you’ll need a reason.

Ben Armstrong agrees but he calls it something different; he’d say that virtualisation technology is a commodity. Companies pick and choose between VMware and Hyper-V on a per-project basis; just because they used VMware for the last virtualised server doesn’t mean they won’t pick Hyper-V this time if it’s a better fit. Just as they might buy Dell servers this year to go with last year’s HP kit.

That’s not a problem for Microsoft, or at least not as much of a problem as it is for VMware. If you put in VMware, you might go on to put Windows Server or SQL Server or Exchange or BizTalk on top of it, and you might choose System Center to manage it alongside your physical servers. If you choose Hyper-V (free or with Windows Server), VMware has nothing else to sell you. That’s why the rumoured VMware Linux distribution (if it’s not a red herring caused by VMware hiring more folks to work on Linux-based ESX) would make some sense – for VMware, that is: you can run Linux on Hyper-V, so you might pick VMware Linux and pay for VMware Linux support. The question is: would you?

- Mary
