
Red Hat: KVM is vital for the cloud's future

Red Hat's chief executive Jim Whitehurst discusses why the KVM hypervisor is a vital alternative to VMware's, and why the need for governments to regulate the cloud is decades away
Written by Jack Clark, Contributor

As more and more services are divorced from hardware and moved into the cloud, virtualisation grows ever more important.

VMware is the leader in the market, with huge penetration of its ESX and ESXi hypervisors and its own range of cloud services, from the just-announced Horizon App Manager identity authentication system, to its Cloud Foundry platform-as-a-service.

But many companies are worried that the widening use of the ESX and ESXi hypervisors could lock customers into VMware products, thereby shrinking choice and the market for rivals.

Red Hat's chief executive Jim Whitehurst believes KVM — the Linux kernel-based open-source hypervisor — is crucial to the uptake of a more open cloud. He also believes open-source systems, such as those made by Red Hat, are going to gain the same level of dominance in the cloud as they have in the server and datacentre market, but that it is going to take time.

Q: How important is the KVM hypervisor going to be for Red Hat?
A: For Red Hat products, we always say 'choice, choice, choice'. It is not directly important for our specific products. Red Hat Enterprise Linux runs and is certified on ESX. You can build and take our Cloud Foundations offerings, and our CloudForms, and all of that will run on ESX or Hyper-V. So from a specific product perspective, it's not that important.

But [KVM] is extremely important for the development and direction of cloud. If we're going to build cloud architectures in the same proprietary way that software has developed in the past, then what we're really doing is picking the next Microsoft.

If we're picking the Microsoft of the cloud era then — because [as an open-source company] we don't play that game — our product stack will fit into someone else's dominant position, where they are dictating to customers the pace with which new hardware is enabled.

For us, it is critical that KVM is a major player to give customers choice and to let ecosystems drive [innovation] forward.

Q: Given the prevalence of open-source software in the datacentre, could a 'cloud cold war' between proprietary and open-source companies develop and bring development to an impasse?
A: That's happening. You have the Azure vision on one side, which is the Microsoft walled garden. And then you have the free world of Amazon, Google and a whole bunch of others, all running their own flavours but fundamentally running an open-source stack.

But that's a high-class battle for us to get to fight. Wind me up with tens of millions of people working together on open source, ask me how I'm going to be successful versus a big company like VMware, and [the answer would be]: "Well, Google is using this [open-source] stuff to run all its datacentres in the world, and they're contributing back, and Facebook is, and Twitter is." We're all contributing back.

Do I feel like I'm going to have a stack that can deliver what the average enterprise needs to run a mere 50,000 servers? Absolutely. There's a certain confidence we can have in that, because of this faith in the power of participation. Google isn't there contributing stuff for fun; it's stuff they need to run a big datacentre.

So it's user-driven innovation, with the biggest datacentres in the world running this stuff, that makes me confident an open approach, where users drive the technology agenda, will end up delivering what users want.

Q: MIME co-creator Nathaniel Borenstein believes the cloud could get to a point where governments will need to step in and regulate competition. Do you think there could be scope for this?
A: Whenever you get down to a small oligopoly you have to think hard about regulation. I think that is a problem my kids might need to deal with. Think how long it took for mainframes to give way to client/server. We talk about it as a transition, but it was a 30-year transition.

Is that possible? Maybe, but I think we're so far away from being able to know, because the [real] question is: "Can workloads ever be so homogeneous that there can be only two different ways to do it?"

Coming from my prior industry — the airline industry — you can almost think computing is like transportation, and you can argue that there are subsectors of that. Some of [the transportation is] done by shipping; when things need to go fast you have airplanes, whether passenger or FedEx; you have roads, which allow for more individuality; and you have railroads, where you have to consolidate things, but it's much more energy efficient. And so you have a whole system around transportation.

I imagine [cloud computing] would be more likely to work out that way than consolidating down to one or two.

Some things will be "lowest cost, lowest cost, lowest cost", but other things [will not]. When you go to your ATM and it shows your bank balance, that can't be 99.9-percent right — it has to be 100-percent right. The service-level agreements around 100 percent versus 99.9 percent are night and day.

This is one of the things I talk to CIOs about — they're being pressured to do what Google does, but Google keeps most of its products in beta all the time. Often you'll do a web search and you'll go, 'Hmm, it's crapped out, better refresh'.

Q: The infrastructure has hiccups?
A: It has hiccups. But for a bank, when someone goes to the ATM, it's got to work, right? When you're moving hundreds of millions of dollars around, 99.999 percent doesn't even cut it. There are going to be people who need different levels of security, different levels of performance in terms of speed, different levels of reliability. Some people are going to want the absolute lowest possible cost.

Could you get to three people doing the real commodity low-cost stuff? Maybe, but that's something my kids can worry about.
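To put numbers on that gap between the "nines", here is a minimal back-of-the-envelope sketch in Python (ours, not from the interview) converting each availability level into the downtime it permits per year:

```python
# Back-of-the-envelope arithmetic behind the "nines": how much downtime
# per year each availability level actually permits.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # ignoring leap years

def allowed_downtime_seconds(availability_pct: float) -> float:
    """Maximum downtime per year, in seconds, at a given availability
    percentage (e.g. 99.9)."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    secs = allowed_downtime_seconds(pct)
    print(f"{pct:>7}% uptime allows {secs / 3600:7.2f} hours "
          f"({secs / 60:8.1f} minutes) of downtime per year")
```

At 99.9 percent, an ATM network could be dark for almost nine hours a year; even 99.999 percent still permits just over five minutes, which is the sense in which it "doesn't even cut it" when money is moving.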

Q: If data can be transported via multiple means, then that's the dream world?
A: Right. But think how it used to cost $1,000 to fly transcontinental; you can now fly for $99. That's not because airlines don't want to charge $1,000 — trust me, I've been there, we love charging people lots of money — but the market has developed in a way where they have to offer these fares, and it's a very transparent marketplace.

If you had to choose and say that for the next 10 years you'll fly American Airlines, [then] every year American Airlines gets to decide what to charge you. Once you've committed to flying with them, what are they going to charge you to fly transcontinental in two years? That's where you get into business models, and that's where you can start having the equivalent of lousy service and high prices if you don't have some forcing mechanism.

The reason prices have come down is that it's a two-letter code and a dollar price at the point of purchase. It's BA or UA, and there's your price. That ultimate price is what drives the value to the consumer rather than the supplier, and that's a positive thing.

