Virtualisation is one of the most exciting areas in IT today. Allowing IT managers to run multiple operating systems on a single piece of hardware should mean lower costs, fewer physical servers and better system reliability. It is an area in which Microsoft has lagged, however, and the company is now hoping to challenge VMware, XenSource and others with the impending launch of its long-awaited hypervisor product some time next year.
Jeff Price, the man responsible for managing Windows Server, storage and virtualisation products, talked up the company's efforts when speaking to ZDNet UK recently. But will those efforts be enough to convince customers?
As Price admits, when it comes to virtualisation, Microsoft is still some way from a finished product, despite having done a great deal in a short time.
Q: What is the end-game with virtualisation at Microsoft?
A: We think a lot of IT pros will increasingly break up their server functionality into packets, letting them do things like machine virtualisation and application virtualisation. Running applications on virtualised servers means you may never deploy a physical server. Once you have done that, when the load on a server becomes too great you can take a couple of applications and move them to a second machine.
Already at Microsoft, the default is that if you ask for a new server, you are going to get a VM, a virtual machine; you have to make the case why it shouldn't be one. Now just imagine that change in provisioning speed: it used to take about two weeks to get a new machine provisioned, now it's 20 minutes.
Now think what people would do with new technology if the friction of consuming it was reduced. The sky's the limit.
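Price's point about moving applications off an overloaded machine is essentially a placement, or rebalancing, decision. As a purely illustrative sketch (the data structures, names and threshold here are hypothetical, not taken from any Microsoft product), a naive rebalancer might look like this:

```python
def rebalance(hosts, threshold=0.8):
    """Move VMs off any host whose load exceeds the threshold.

    hosts: dict of host name -> {"capacity": float, "vms": {vm_name: load}}.
    Returns a list of (vm, source, destination) migration decisions.
    Assumes at least two hosts; this toy version evicts the smallest VM first.
    """
    def load(h):
        return sum(h["vms"].values()) / h["capacity"]

    moves = []
    for name, host in sorted(hosts.items()):
        while host["vms"] and load(host) > threshold:
            # Greedy choice: evict the smallest VM on the overloaded host.
            vm = min(host["vms"], key=host["vms"].get)
            # Destination: the least-loaded of the other hosts.
            dest = min((h for h in hosts if h != name),
                       key=lambda h: load(hosts[h]))
            hosts[dest]["vms"][vm] = host["vms"].pop(vm)
            moves.append((vm, name, dest))
    return moves

# Hypothetical cluster: hostA is at 90% load, hostB at 10%.
hosts = {
    "hostA": {"capacity": 10.0, "vms": {"web": 4.0, "db": 3.0, "cache": 2.0}},
    "hostB": {"capacity": 10.0, "vms": {"mail": 1.0}},
}
moves = rebalance(hosts)
```

With these numbers, the smallest VM ("cache") is migrated from hostA to hostB, bringing hostA under the 80% threshold. Real placement engines weigh memory, I/O and affinity as well as raw load, but the shape of the decision is the same.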
Do you have this technology up and running?
The hypervisor is with our internal teams now.
Are you happy with the speed of progress on the hypervisor?
It's been pretty remarkable, the progress they have made in the amount of time they have had. We have some incredibly bright people working on that. We are optimistic that the Longhorn Server [due to ship next year] and the hypervisor will be as close together as we can make them — certainly no more than 180 days.
The way we are developing this is that Longhorn is largely architected to use the hypervisor but, because this is our first effort at doing the hypervisor, we want to make sure that we can still deliver Longhorn even if the hypervisor isn't quite ready yet. So it can come later and plug right in.
Do customers really want this?
You bet. Customers and partners want this and System Center Virtual Machine Manager (VMM). With that you can manage both today's Virtual Server and Longhorn. VMM will be available in the first half of next year as well.
What about virtualisation with other environments?
We've got a bunch of other investments. We have a relationship with XenSource to work on the Xen hypervisor for Linux, so any Linux distribution out there will run well on our hypervisor. We have had Suse Linux running alongside Longhorn, both on VMs, and we think it will be a popular choice.
Do you see other environments coming into that, perhaps other distributions of Linux?
Yes, I do. Customers will pressure their vendors to come onboard. Two-thirds of servers are Windows and the other third is a mix of Linux, Unix and various other things. We've seen a lot of vendors jump on the bandwagon of web services, too.
There are a huge number of dimensions to interoperability, after all, including the applications layer, the management layer and the networking.
So it is customers who are driving this?
I think it is the maturing of the industry. And the customers are saying, "Don't make me do the integration, don't make me figure out the IP licensing. You take care of that". And those are the managed products that people want to buy, from the people who have done that work. I think you will see the ease of interoperability just continually go up. It's going to be protocol by protocol, application by application.
You appear to sell your applications, such as CRM, completely separately from the server business. How do you link them together?
From the server side we are a horizontal platform. The SQL guys are trying to build the database business but people want to run Oracle on Windows and, from our point of view, we say great. But at the same time we have people selling SQL Server, and that just reflects the diversity of businesses that we are in. We have all those efforts, and now with the Linux business it goes all the way down to the operating systems.
We have already mapped out what we call Vienna, which is the next major operating system release we will work on. It's vague at this point. It's mostly research. We have a product planning team that spends a lot of time looking at trends, customer feedback, scenarios we want to work on. At some point that will coalesce into a set of specifications we will work on.
In the middle of that we will do an update release to Longhorn server, probably in about two years, and that is kind of opportunistic, looking at what kind of features we want to add to the code base.
Predicting the future is never wise, but what features would you have in Vienna?
There will be continued focus on [the question]: where does virtualisation take us? What are the agile IT environments that you can create with this dynamic? More and more customers are going to have a mix of self-hosted and hosted applications. But what does that mean? What does that mean for things like identity management? What does it mean in terms of the programming platforms for things like the Windows Workflow Foundation and the Windows Communication Foundation? How do they work in a world where I don't own or control the network that all my servers are on? What does that mean for compliance?
Some of these problems are just incredibly interesting.