Tech events that are shaping Microsoft's future

Microsoft's Bob Kelly talks about a remarkable quarter: one that has certainly been eventful, though not always for the right reasons
Written by Toby Wolpe, Contributor

Bob Kelly is a Microsoft executive with a wide remit that takes in infrastructure products ranging from virtualisation to security. He talks to ZDNet UK about some of the major events that have marked the autumn period for the software giant.

The final quarter of 2009 is proving a busy time for Microsoft. Some events, such as the launch of the Windows 7 operating system and the release of Windows Server 2008 R2 with Hyper-V R2, have been the result of careful planning. Other events, such as the Sidekick data loss, have not.

ZDNet UK took the opportunity to quiz Microsoft corporate vice president Bob Kelly about some of these developments when he passed through London recently.

Kelly is responsible for Microsoft infrastructure server products, including business online services, virtualisation, Windows Server, identity and security, and high-performance computing.

With the appearance in October of Release 2 of the Hyper-V virtualisation system, Microsoft appears to have made up ground on VMware. Are there measures Microsoft will be employing to take the lead, or are you happy with where you are?
Virtualisation in itself is less interesting than what it enables. But that said, if you look at [analyst firm] IDC figures, you'll find we have about 20 percent of the hypervisor market and we'll double that in the next 12 to 18 months. We expect very rapid adoption of Hyper-V, particularly if the customer is virtualising Windows, which is the vast majority of the market.

But the reality of IT is the rich margins you see around a niche technology as it moves to mainstream will start to erode, not because we are going after the margins or anything of the sort, but because the natural course of adoption means you can't get rich margins on technology that goes mainstream — that is just the nature of the beast. That will pose certain challenges to the [virtualisation] economic model that has been established over the past two or three years.

Despite all the hubbub and excitement about virtualisation, it's still quite early. Less than 20 percent of all the servers in use today are used as hosts for virtualisation. The reality is that there are 31 million x86 servers installed on the planet today, so it's going to be a physical and virtual world for a long time.

You say it's early days for virtualisation, so do you think organisations are making the most of the technology and approaching it in the right way?
I think there really will be a change in approach in the way organisations use virtualisation. The reason they started to adopt server virtualisation in the first place was because they had server sprawl. They had one server per app and one for DNS and so on, and there was tremendous under-utilisation of those servers.

That was problem one: they were spending too much on the hardware and not getting a return. So they started to introduce virtualisation on the server side because it helped them reduce their capital expenditure. The corollary challenge for customers now is that they have virtual machine sprawl.

They have this set of virtual machines that has spun out of control, so all virtualisation has done is surface a different management problem. It's in the nature of these things that you see a niche technology pop up that's interesting. VMware had first-mover advantage, in the x86 world at least.

But as these technologies normalise, they become part of the platform. Then the vendors that can give customers a way of taking advantage of the technology more broadly, not only on the server but across other pieces of the infrastructure, are the ones delivering real value to the customer.

Virtualisation becomes an important enabler not just of reducing the cost of IT on-premise, but also because it can enable a whole new set of scenarios in the cloud. The technology that spans the on-premise and the off-premise, or public and private cloud, is virtualisation. That enabler becomes really important as we move to a software-plus-services world.

Do you think changes such as the cloud and virtualisation will fundamentally change Microsoft's business model and the way it makes its money?
Sure. As we talk about a software-plus-services world, the economics of cloud-based services are very different. The top-line revenue position will grow pretty dramatically, but the profitability, or Cogs [cost of goods sold], implications of a services-based world are very different.

So our aspiration is of course to grow both. If we can deliver a better value proposition to customers, where we run some of their IT at a lower cost to them but at a higher dollar figure to us, then over time it will be a very profitable business for the company. The truth is also that, in a services sense, we are still very nascent.

But our approach with BPOS [Microsoft's Business Productivity Online Suite hosted messaging and collaboration tools] contrasts with, say, the Google model. The Google model is: 'Partner, you buy 12 months of the service upfront and then you resell that at whatever price you want'. So you get a 20 percent discount off the list price upfront, but you own the asset. It's a resell model.

Our model is not a resell model. It is much more like an agency model where the partner gets an ongoing annuity stream from the moment they transact and they don't take any goods on their books.

So our model is friendly to partners, and our strategy is to use whatever capabilities we need, across virtualisation and across the services stack, to enable that transition to software-plus-services. We can deliver a lower-cost product to the customer at a higher revenue to the company.

How will the balance change between a cloud world and a world based on software revenue and licensing?
We will continue to see both. Our strategy is software plus services. So we always intend to sell software to our customers. Some customers will be almost exclusively cloud-based. Those will tend to be the vast majority of the very long tail of small businesses.

Then the larger customers will make a decision workload by workload: should this application be run on-premise or off-premise? Can I consume it as a service or do I need to deliver the service myself? Messaging is a canonical example. More customers are very comfortable with the notion of consuming messaging as a service. Line-of-business apps tend to be much more on-premise.

We can't predict what the world will look like in one year, two years or five years. But we need to be the platform no matter which way it goes. That has always been our strategy.

Are there any changes in the offing to reduce the complexity of Microsoft licensing?
We always look at licensing to make it as simple as possible for customers to consume our software. For example, we recently announced two addenda to our enterprise agreements, to make it easier for customers to consume our applications and infrastructure.

Now it's very simple for a customer who wants to manage physical and virtual systems through a single pane of glass on Windows. They now have one model they can move to. We always try to simplify. It doesn't mean it's simple, but we try to simplify.

In October, T-Mobile's US Sidekick service, run by Microsoft subsidiary Danger, lost customer data. What impact might that have on people's perceptions of Microsoft as an organisation and as a cloud supplier?
As a user of technology and as a responsible vendor of technology, it's very difficult when customers lose data. It just doesn't feel good. And we're doing everything in our power to make sure that situation is rectified and that it doesn't happen again.

At a macro level, software and IT are inherently risky, so you do everything you can to create redundancy so customers don't have these kinds of challenges.

So will this [data loss] slow down the move to cloud computing? Things like this happen. Google's had outages, MSN has had outages, IBM has had outages. These things are unfortunate — we sometimes lose power. Does that mean we somehow lose confidence in the ability of the grid to provide power to us?

What it does mean is that customers will be circumspect about which workloads move [to the cloud] soonest rather than last. What kind of work either tolerates the highest latency or is the least mission-critical? What's already happened is that messaging has moved that way, because it's a store-and-forward infrastructure and because you can create some redundancy.

But if I had an emergency service operating police and ambulances, that can't go down. If that goes down, you have a different problem — people die. So they are not likely to move those kinds of workloads any time soon.

Surely part of the issue here is how an organisation handles a data loss like that. So will Microsoft be quite open about what happened and give people a clear account of events and countermeasures?
I'm not close enough to what actually has been said. But I think the basic principle of what you've just described is always true. You'll always be better in the mind of the customer and in the market if you're transparent about where things are.

And if something happened and it's your fault, take accountability for it. And if something happened and you don't want to throw your partner under the bus, don't throw your partner under the bus. The most important thing is you do the right thing by the customer and whatever happens to take care of that is what has to happen.
