
HP heads into battle over virtual networking

HP's ProCurve chief Paul Congdon talks about how the hardware maker sees switches taking on virtual networking in the data center and how HP trumped Cisco at the IEEE standards committee.

There's more virtualization than ever before being installed in data centers. Each host server runs a hypervisor that includes a virtual switch — a piece of software that manages networked data flows to and from the virtual machine under its control.
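To make the idea concrete, here is a minimal sketch of the forwarding core of a learning virtual switch, the kind of component a hypervisor embeds. This is an illustration of the general technique, not HP's or any vendor's implementation; the port names and MAC addresses are invented for the example.

```python
# Minimal sketch of a learning Ethernet switch of the sort a hypervisor
# embeds. Real virtual switches are far more involved; the names and
# structure here are illustrative only.

class VirtualSwitch:
    def __init__(self, ports):
        self.ports = set(ports)   # VM-facing ports plus a physical uplink
        self.mac_table = {}       # learned MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Learn the source, then return the set of ports to send out on."""
        self.mac_table[src_mac] = in_port        # learn where src lives
        out = self.mac_table.get(dst_mac)
        if out is not None and out != in_port:
            return {out}                         # known destination
        # Unknown destination: flood everywhere except the ingress port
        return self.ports - {in_port}

vs = VirtualSwitch(ports=["vm1", "vm2", "uplink"])
vs.receive("vm1", src_mac="aa:aa", dst_mac="bb:bb")  # unknown dst: floods
print(vs.receive("vm2", src_mac="bb:bb", dst_mac="aa:aa"))  # {'vm1'}
```

Note that VM-to-VM traffic on the same host never touches the physical network, which is exactly why the management and policy questions discussed below arise.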

In addition, switches are no longer just switches. Their functionality has extended considerably in recent years: they also act as firewalls, enforce access control lists, perform intrusion-prevention tasks such as deep packet inspection, and balance loads across Ethernet ports.

As a result, the performance, resource implications and management issues of virtual switches are of increasing concern.

An industry standard called the Virtual Ethernet Port Aggregator (VEPA) has been proposed and agreed — although not without some healthy differences of opinion from contributing companies. HP's way of approaching the question is standards-based, HP executive Paul Congdon told ZDNet UK. In other words, Congdon said, it does not tamper with the Ethernet packet format — whereas Cisco's approach does, in order to aid integration with its switches, as part of its recently announced UCS initiative.

Congdon, chief technology officer of HP's ProCurve network hardware division, talked to ZDNet UK about the battle over virtual networking standards at the IEEE 802.1 committee, and other issues of the day.

Q: Network infrastructure has become a lot more complex. How has the cloud changed the switch?
A: It's all about the convergence of compute and storage today, such as blades. It's about that convergence — where does the server end, and the switch begin? We've opened up the switching fabric for applications to sit on top of it.

How do you see the take-up of 10Gb Ethernet developing, and why?
Generally speaking, people are putting off spending on newer architectures and technology, but 10Gb is really considered a mainstream technology now, so we haven't seen people holding off.

There are two reasons for this. First, computing is generally moving towards the data center; second, we've been successful in establishing 1Gb Ethernet to the desktop. Enterprises weren't as concerned about matching the bandwidth that aggregates at the center as they are now, given the extra power on the desktop and 802.11n wireless speeds. They're conscious of that.

From a networking viewpoint, where is virtualization technology headed?
There are subtle things that are very interesting in the networking space. The key impact is migration, such as VMotion [VMware's tool for live migration of virtual machines], which creates an environment where servers are now mobile in the data center. That puts a strain on the network fabric, so the network must be dynamically reconfigurable to make that happen.

Also, many of the issues we've been working on for years in the client space, such as security — some of those technologies will need to find their way into the data center control system. People are now comfortable with the data center's physical security, but with VMotion and other new capabilities, we need to start looking at how we secure those migrations, which brings 802.1X into focus.

And where does that take the data center?
What we've been doing over the last couple of decades is decomposing the mainframe: into mini-computers, then distributed environments, then virtualization environments and so on. But that's created a management burden, with lots of different management domains.

So the future is about addressing that burden. There's a need to push things back into a larger building block that you provision from, instead of dealing with individual pieces of compute, storage and network. You need to deal with them as one piece.

The blade server is an example: we collapsed things into a common enclosure. We used to run different networks on the backplane in the same enclosure, but now it's all converged onto Ethernet. The same thing will happen with the shared I/O environment in the blade chassis, which then allows you to provision things at a higher level.

Let's get to specifics. HP and Cisco went head to head in the standards body recently over the Virtual Ethernet Port Aggregator (VEPA). What was all that about?
With the advent of virtualization, we now have an Ethernet switch in every hypervisor. It's been beneficial to VMs because they get connectivity, but the capabilities of switches have continued to grow. For example, we have 16 active projects in [the IEEE] 802.1 [committee] being added to switches.

So we need to ask: how much functionality should I add to a switch when it might be running on a hypervisor? It gets more expensive, and it soaks up CPU cycles and power consumption. You have to ask whether every server needs it. How can we draw a line in the sand and indicate that you no longer have to put every feature of a switch in the host server?

At HP, we can change the packet-forwarding behavior to reduce the virtual switch's capabilities, forcing packets out of the host so that an external switch can apply those features instead.
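That forwarding change can be sketched roughly as follows. In VEPA mode, the host-side component sends all frames out its uplink, even traffic between two VMs on the same host, and the adjacent physical switch, with "hairpin" (reflective relay) forwarding enabled, applies its policy and may send a frame back down the very port it arrived on. This is an illustration of the VEPA concept only, not HP's actual code; the port names and policy hook are invented.

```python
# Rough sketch of VEPA forwarding versus a conventional virtual switch.
# Illustrative only; names are invented for the example.

def vswitch_forward(dst_port):
    """A conventional virtual switch delivers VM-to-VM traffic internally,
    so the physical network never sees it."""
    return dst_port

def vepa_forward(dst_port):
    """A VEPA always sends frames out the uplink, even when the destination
    is another VM on the same host, so the external switch sees them."""
    return "uplink"

def external_switch(frame_dst, policy_ok):
    """The adjacent physical switch applies its features (ACLs, firewalling,
    deep packet inspection) and, with hairpin/reflective relay enabled,
    reflects permitted frames back down the port they arrived on."""
    if not policy_ok(frame_dst):
        return None              # dropped by external policy
    return frame_dst             # hairpinned back toward the host

# VM1 -> VM2 on the same host:
print(vswitch_forward("vm2"))                  # 'vm2'    (never leaves host)
print(vepa_forward("vm2"))                     # 'uplink' (forced external)
print(external_switch("vm2", lambda d: True))  # 'vm2'    (reflected back)
```

The trade-off is one extra hop across the host's uplink for intra-host traffic, in exchange for keeping the feature-rich forwarding logic in external switch silicon rather than in hypervisor software.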

Can you give me some examples of the kinds of features you're talking about?
Examples include ACL [access control lists] and firewall features — do you want to have the hypervisor do that, which uses up a lot of CPU cycles? The end user buys a hypervisor to run applications, not a firewall, so it's best to do this in the switch, as silicon's capabilities are going up dramatically.
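As a toy illustration of the kind of work being offloaded, a first-match ACL evaluation of the sort switch silicon performs in hardware might look like this. The rules and field names are invented for the example; this is generic ACL logic, not HP's implementation.

```python
# Toy first-match ACL of the sort a switch evaluates in silicon.
# Rules and field names are invented for illustration.

ACL = [
    # (protocol, dst_port, action)
    ("tcp", 22,   "permit"),   # allow SSH
    ("tcp", 3389, "deny"),     # block RDP
    ("any", None, "permit"),   # default: allow everything else
]

def evaluate(protocol, dst_port):
    """Return the action of the first matching rule."""
    for proto, port, action in ACL:
        if proto in ("any", protocol) and port in (None, dst_port):
            return action
    return "deny"              # implicit deny if nothing matches

print(evaluate("tcp", 3389))   # 'deny'
print(evaluate("udp", 53))     # 'permit' via the default rule
```

Done per-packet in a hypervisor, this consumes host CPU cycles; done in switch hardware, it runs at line rate, which is the offloading argument Congdon is making.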

So how can we augment the hypervisor switch to apply network policy?
The important thing is that ours, unlike Cisco's proprietary approach, uses the address tables and other standard features of Ethernet packets, without proprietary extensions.

How does HP's version of VEPA differ from Cisco's?
We had a huge battle in the standards body because their approach was hugely different from everyone else's. But we've now arrived at a base standard, an HP proposal, that Cisco can build on: they have an additional tag that meets their requirements. It's good how it wound up.

You're vice chair of that committee. How independent can you be?
Well, someone is signing your paycheck, so you're always influenced by their views. But in this instance, I wanted to create a natural evolution, to meet customer requirements using a technical approach, not an HP approach.

This article was originally posted on ZDNet UK.