Addressing virtualization's Achilles' heel

Summary: The benefits of virtualization are obvious, but when you start really increasing the density of virtual machines to maximize utilization, suddenly it isn't such a simple proposition. The latest CPUs from AMD and Intel are more than up to the task of running 10-20 or more applications at a time; the bottleneck lies elsewhere.


The benefits of virtualization are obvious, but when you start really increasing the density of virtual machines to maximize utilization, suddenly it isn't such a simple proposition. The latest CPUs from AMD and Intel are more than up to the task of running 10-20 or more applications at a time. Most servers run out of memory and I/O bandwidth well before they run out of processing power. Recent announcements from the leading server vendors address the memory side by packing more DIMMs onto a single motherboard (including blade server boards), but you can only add so many Ethernet cards and Fibre Channel HBAs. And then there are the switch ports to go with them (blade systems help a lot here).

If you are part of the elite group of infrastructure and operations managers who are pushing the VM density envelope, then 10GbE may be your better option. Most VMs individually don't consume the full bandwidth of a single GbE NIC, but the standard ESX network configuration has quickly become 6 NICs and 2 FC ports per host. The NICs serve the service console, the VMkernel, and the VM network, and you need two of each for redundancy, for a total of six. Each of these NIC connections requires a separate data center uplink cable. On top of this, the more VMs you add, the more bandwidth is consumed, which requires more ports, and that means a lot of connections. Even if each VM consumes only 10% of a single GbE link's bandwidth, you run out of I/O very quickly. Plus, every VM is sharing a limited set of physical NICs; heaven forbid you actually want to apply quality of service or give any of these VMs its own physical NIC, as is often the case.
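The cabling and bandwidth arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model, not a sizing tool: the 6 NICs and 2 FC ports per host come from the standard ESX configuration described in the text, and the 10% per-VM utilization is the assumption quoted there.

```python
# Back-of-the-envelope I/O math for the GbE-based ESX host configuration
# described above. Figures come from the article; nothing here is a vendor spec.

NICS_PER_HOST = 6        # 2x service console, 2x VMkernel, 2x VM network (redundant pairs)
FC_PORTS_PER_HOST = 2
GBE_MBPS = 1000

def host_cabling(hosts):
    """Total data center uplink cables needed for a given number of ESX hosts."""
    return hosts * (NICS_PER_HOST + FC_PORTS_PER_HOST)

def vm_link_utilization(vms_per_host, per_vm_util=0.10):
    """Fraction of the redundant VM-network GbE pair consumed if each VM
    averages per_vm_util of a single GbE link (2 NICs carry VM traffic)."""
    demand_mbps = vms_per_host * per_vm_util * GBE_MBPS
    capacity_mbps = 2 * GBE_MBPS
    return demand_mbps / capacity_mbps

print(host_cabling(16))          # cables for a 16-host cluster
print(vm_link_utilization(20))   # 20 VMs at 10% each saturate the pair
```

Even at a modest 10% average per VM, 20 VMs fully saturate a redundant pair of GbE links, which is the "running out of I/O very quickly" point made above.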

10GbE can address the NIC-sharing scenario, and Ethernet storage solutions such as iSCSI and the forthcoming Fibre Channel over Ethernet (FCoE) can save you tremendously on HBA costs (yes, I know Cisco says FCoE is ready today). The harder problem is the need for more true physical connections.

The NIC vendors are addressing this scenario with SR-IOV (single-root I/O virtualization) technology, which splits 10GbE NICs granularly and dynamically so you can set quality-of-service parameters for the virtual NICs that share these pipes. But it's a virtual solution; if you still need more physical NICs, you're out of luck.
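A toy model makes the bandwidth-carving idea concrete. This is an illustrative sketch of partitioning one 10GbE port into virtual NICs with minimum-bandwidth guarantees, in the spirit of the SR-IOV QoS knobs described above; the function and the vNIC names are hypothetical, not any vendor's API.

```python
# Illustrative model of carving a single 10GbE port into virtual NICs with
# per-vNIC bandwidth guarantees. Not a real SR-IOV driver interface.

LINK_MBPS = 10_000

def allocate(guarantees_mbps):
    """Return the headroom left on the physical link after honoring each
    vNIC's guarantee, or raise if the guarantees oversubscribe the port."""
    committed = sum(guarantees_mbps.values())
    if committed > LINK_MBPS:
        raise ValueError(f"oversubscribed by {committed - LINK_MBPS} Mbps")
    return LINK_MBPS - committed

# Hypothetical carve-up of one 10GbE port for an ESX host's traffic classes.
vnics = {"console": 500, "vmkernel": 2000, "vm_net_a": 4000, "vm_net_b": 3000}
print(allocate(vnics))  # headroom remaining on the pipe
```

The point of the model is that one 10GbE pipe comfortably absorbs all the traffic classes that previously demanded six separate GbE ports, with room to spare.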

To address this, HP has released Flex-10 Virtual Connect modules for its c-Class blade systems. These 10GbE switch modules (the technology is also implemented on the 10GbE NICs in the BL495c blade) can physically split a single 10GbE connection into four physically discrete connections with tunable bandwidth (100Mbps increments, up to 10Gbps per connection). With Flex-10 modules and BL495c blades, each physical server gets 8 "physical" NICs (up to 24 with expansion cards), which fan out to 384 "physical" connections coming out of a full bank of switch modules. You can of course blow out this number with virtual NICs per VM, as not every VM will need its own physical NICs. And each of these connections can replace an FC port in an Ethernet storage configuration. If you want to pack a ton of VMs into a tiny package without sacrificing I/O performance, this is an intriguing way to go. Even if you don't use Flex-10 for storage, the density benefits here are worth considering.
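The fan-out figures quoted above are easy to verify. The per-blade numbers (8 FlexNICs onboard, 24 with expansion cards) and the 100Mbps tuning increment come from the text; the 16-blade chassis size is an assumption about a fully loaded c7000 enclosure of half-height blades.

```python
# Checking the Flex-10 fan-out arithmetic quoted in the article.
# Assumption: a fully loaded c7000 enclosure holds 16 half-height BL495c blades.

BLADES_PER_ENCLOSURE = 16
FLEXNICS_ONBOARD = 8
FLEXNICS_MAX = 24           # with expansion cards
INCREMENT_MBPS = 100        # Flex-10 bandwidth tuning granularity
LINK_MBPS = 10_000

connections = BLADES_PER_ENCLOSURE * FLEXNICS_MAX
rate_steps = LINK_MBPS // INCREMENT_MBPS

print(connections)   # "physical" connections from a full bank of modules
print(rate_steps)    # tunable rate steps available per connection
```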

As we stated in our report on 10GbE futures earlier this year, the move to 10GbE is a pricey upgrade today, but it is more easily justified as part of an IT infrastructure consolidation, since so much more consolidation can be achieved. Blade servers and even VMware constantly face similar price-justification challenges but are winning more and more customers through this same cost analysis. You'll have to include the switch upgrades in your analysis, but if you can achieve 2x or greater consolidation in doing so, the investment may be well worth it.
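The break-even logic in that cost analysis can be sketched as follows. All dollar figures here are placeholders chosen for illustration, not quoted prices; the point is the shape of the comparison (fewer, denser hosts plus a switch upgrade versus the status quo), not the numbers.

```python
# Minimal break-even sketch for the 10GbE consolidation argument.
# All prices are illustrative placeholders, not vendor quotes.

def cost_per_vm(servers, cost_per_server, switch_cost, vms_per_server):
    """Total infrastructure cost spread across all VMs hosted."""
    return (servers * cost_per_server + switch_cost) / (servers * vms_per_server)

# Status quo: 20 GbE hosts at 10 VMs each.
before = cost_per_vm(servers=20, cost_per_server=14_000, switch_cost=0,
                     vms_per_server=10)
# After: 2x consolidation onto 10 denser 10GbE hosts, plus a switch upgrade.
after = cost_per_vm(servers=10, cost_per_server=16_000, switch_cost=60_000,
                    vms_per_server=20)

print(before, after)  # per-VM cost before vs. after the upgrade
```

Under these assumed numbers the per-VM cost drops despite the switch spend, which is the 2x-consolidation justification the paragraph above describes; with less consolidation, the same formula can just as easily come out against the upgrade.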

Topics: Servers, Hardware, Networking, Virtualization

James Staten

About James Staten

James Staten is a Vice President and Principal Analyst at Forrester Research, serving Infrastructure and Operations professionals.


Talkback

2 comments
  • Just built a VMware farm

    The servers use their hard drives directly from a SAN infrastructure. Then we have IBM servers with 10 NICs each (two 4-port gig cards and 2 onboard).

    As for sharing: yes, you may run out of I/O on a server, but if you balance them properly, that shouldn't be an issue. The issues come to the front when you start trying to save money beyond a safe limit.

    With the VMware software and IBM hardware/support, we spent 14 grand on each server: dual quad-core 2.5GHz with 32GB RAM. I was told we could get up to around 30 servers on each one, which makes the cost of each server $500 plus licensing. Even if I can only get 15 on each server, the cost is still $1,000, which is around $2,000-3,000 cheaper than a standalone server.

    I don't count the SAN because I would have bought that anyway. Though if I were to factor it in, it would be $1,000 at 30 servers per physical server, or $2,000 for 15 servers.
    Been_Done_Before
  • RE: Addressing virtualization's achilles heel

    Great article James. We really think 2009 is going to be an exciting time. Clearly, the next big leap forward for virtualization is the network layer.

    We agree that a lot of folks still balk at the cost of 10GbE. That's why we need more new ideas like Flex-10 to get customers there affordably and with more benefits that pay for themselves.

    Jason - HP Blade Team
    TechBoom