The market for virtualised server I/O hots up


It looks like the market to rewire your datacentre by virtualising I/O is hotting up.

In March, I wrote about Xsigo and how it is bidding to centralise the arrays of cabling that infest server racks. Instead of fitting each server with its own I/O cards, you install an InfiniBand card in the server and connect it to a box at the top of the rack. The I/O cards - storage and networking - now reside inside Xsigo's Fabric Director box and, to the server, they appear local.

One of the benefits of this approach, which is effectively a virtualisation of the server I/O stream, is that it allows you to switch I/O cards, as well as attach and detach them in software, all of which fits well with orchestrated virtual infrastructures. Other benefits claimed include improved performance and reduced operational expenditure, because there are fewer cables to manage and it's quicker to alter configurations. In theory, it could also allow you to get away with deploying fewer expensive components such as HBAs.

But Xsigo doesn't have the field to itself any more. NextIO has joined the fray with its vNET I/O Maestro. The company likewise promises to boost I/O performance and reduce costs -- but then, why else would you buy into the concept, let alone the additional boxes you'll need to manage, something neither vendor spends much time talking about?

NextIO's sales and marketing VP Mike Heumann reckoned that his company wasn't competing with Xsigo but with Cisco and Brocade -- it's good to be associated with the big beasts in the jungle. However, he tacitly admitted during our conversation that Xsigo is his main rival. NextIO's advantage over its longer-established rival, according to Heumann, is that potential customers liked Xsigo's idea but not its entry price point. So NextIO is undercutting Xsigo by focusing on deployments of one rack at a time, rather than four or five racks at a time. "They're at a higher level," he said.

This is how it works. "We run a PCI-X link between the server and our box, which then accepts all the adapters that are seen by the server as local," Heumann said. "You can then decide which adapters are presented to the servers and what percentage of bandwidth they can use, which you can change dynamically. It means you can do anything, including teaming and fail-over, that you would do locally. It also means you don't need to change your governance policies."
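The attach/detach model Heumann describes can be sketched as a small control-plane simulation in Python. To be clear, this is an illustrative assumption of how such a system might track state, not NextIO's actual software or API; all class, method, and adapter names here are hypothetical.

```python
# Hypothetical model of a shared top-of-rack I/O chassis: adapters live
# in the chassis, and software decides which servers "see" each adapter
# and what share of its bandwidth they get. Illustrative only.

class IOChassis:
    def __init__(self):
        self.adapters = {}      # adapter_id -> kind ("nic" or "hba")
        self.assignments = {}   # server -> {adapter_id: bandwidth_pct}

    def add_adapter(self, adapter_id, kind):
        """Register a physical adapter installed in the chassis."""
        self.adapters[adapter_id] = kind

    def present(self, server, adapter_id, bandwidth_pct):
        """Present a shared adapter to a server with a bandwidth share,
        rejecting changes that would oversubscribe the adapter."""
        if adapter_id not in self.adapters:
            raise KeyError(f"unknown adapter {adapter_id}")
        claimed_by_others = sum(
            pct
            for other, shares in self.assignments.items()
            for aid, pct in shares.items()
            if aid == adapter_id and other != server
        )
        if claimed_by_others + bandwidth_pct > 100:
            raise ValueError("adapter bandwidth oversubscribed")
        self.assignments.setdefault(server, {})[adapter_id] = bandwidth_pct

    def detach(self, server, adapter_id):
        """Detach an adapter from a server - purely a software change."""
        self.assignments.get(server, {}).pop(adapter_id, None)

    def view(self, server):
        """The set of adapters this server currently sees as local."""
        return dict(self.assignments.get(server, {}))
```

For example, two servers could split a shared NIC 60/40, and when one is detached, the other's share can be raised to 100% dynamically - the reconfiguration the quote describes, with no cables touched.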

The typical cost per rack is about $75,000 for 10 servers, plus the cost of the PCI-X cards, which Heumann reckoned cost about $100 each.
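Taking Heumann's figures at face value, the back-of-envelope cost per server works out as follows (a rough calculation on the quoted numbers, not a vendor price list):

```python
# Per-server cost using the figures quoted above: a $75,000 chassis
# shared across a 10-server rack, plus a ~$100 card per server.
rack_price = 75_000       # vNET chassis for one rack (USD)
servers_per_rack = 10
card_price = 100          # per-server PCI-X card (USD)

per_server = rack_price / servers_per_rack + card_price
print(per_server)  # 7600.0
```

That is roughly $7,600 per server before any savings from the HBAs, NICs and cabling it displaces.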

Interestingly, Cisco is aware of Xsigo and the potential threat it poses to Cisco's datacentre hegemony, and has reportedly alerted its sales and marketing teams to weaknesses in the virtualised I/O approach in general and in Xsigo in particular.

Both companies have funding from the usual range of Silicon Valley investment houses, which suggests that there is a need for the technology. Let battle commence...
