What's the best blade server?

Summary: Blade servers were once the saviours of the datacentre. Expandability was king. But do blade servers still make sense today? We find out if they're still worth it.

It has been exactly six years since we last tested blade servers, which in technology years (much like dog years) equates to several generations. Back in 2003, blade servers were emerging as cutting-edge technology (pardon the pun). These days they have matured, and while some may find them routine, we at the lab still find the blade server's nuanced differences from traditional servers rather intriguing.

Since the advent of virtualisation and storage area networking (SAN), blade servers have enjoyed a renaissance in recent years.

Blade servers can be used for many tasks. In most enterprises, their primary roles are to reduce the footprint in a datacentre, increase failover redundancy and cut the overhead of managing disparate server platforms. A single seven rack unit (7RU) blade enclosure can house up to 14 blade servers. If each is equipped with two quad-core processors and 64GB of RAM, the enclosure packs 112 cores and 896GB of RAM into just 7RU of space. In a single 42RU rack, six of these beasts would bring a total of up to 672 cores and 5.376TB of RAM. Any cluster-computing geek would dream of these footprint efficiencies.
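
To put the arithmetic above in concrete terms, here is a minimal back-of-the-envelope calculation (sketched in Python) that reproduces the per-enclosure and per-rack figures. The blade count, socket count, memory and rack height are simply the assumptions from the example above, not any particular vendor's specifications.

    # Back-of-the-envelope blade density calculation.
    # All figures are the assumptions from the example above, not vendor specs.
    BLADES_PER_ENCLOSURE = 14   # blades in one 7RU enclosure
    SOCKETS_PER_BLADE = 2       # two quad-core processors per blade
    CORES_PER_SOCKET = 4
    RAM_PER_BLADE_GB = 64
    ENCLOSURE_HEIGHT_RU = 7
    RACK_HEIGHT_RU = 42

    enclosures_per_rack = RACK_HEIGHT_RU // ENCLOSURE_HEIGHT_RU  # 6

    cores_per_enclosure = BLADES_PER_ENCLOSURE * SOCKETS_PER_BLADE * CORES_PER_SOCKET
    ram_per_enclosure_gb = BLADES_PER_ENCLOSURE * RAM_PER_BLADE_GB

    cores_per_rack = cores_per_enclosure * enclosures_per_rack
    ram_per_rack_tb = ram_per_enclosure_gb * enclosures_per_rack / 1000

    print(f"Per enclosure: {cores_per_enclosure} cores, {ram_per_enclosure_gb}GB RAM")  # 112 cores, 896GB
    print(f"Per rack: {cores_per_rack} cores, {ram_per_rack_tb}TB RAM")                 # 672 cores, 5.376TB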

While computing density is the bright side of the rainbow, there are, of course, downsides to the technology. A fundamental concern is captured by the adage about keeping all of your eggs in one basket. For example, an enterprise consolidating its existing server base from 30 or 50 disparate legacy systems into one shiny new virtualised blade enclosure must give thought to this critical single point of failure.

Certainly, you can ensure that three or more blades in the enclosure mirror each other, with three redundant power supplies and a big, healthy UPS to guarantee your power. You can have a fibre SAN attached to manage all your storage needs, and you can even run multiple redundant 10Gbps network links between the switch on the chassis and your network, but what about the chassis itself? What if the enclosure fails? Even with a four-hour replacement warranty, that is a long time to wait for a fix on something handling the applications of 30 or 50 previous systems. The solution is to build in failover redundancy should a chassis fail. In some cases this can mean replicating the whole arrangement, potentially doubling your costs. Most sizeable organisations do, in fact, do this by running a primary and a secondary datacentre, geographically removed from each other.

Another basic concern is whether your datacentre or computer room can physically handle these machines. Blade servers are notorious for creating new and dense hot spots. It is common sense that the more processing you pack into a smaller space, the greater the heat output, and therefore the greater the need for airflow containment and cooling systems. It is amazing how many architects simply don't plan for this.

Another common oversight is power utilisation. Theoretically, blade servers use less power to perform a given task than their stand-alone siblings (particularly when used in a virtualised environment), but aggregated in such numbers they will collectively draw more. In the past, one rack might have been filled with 10 servers (40 CPUs). Compare this with 84 blade servers (336 CPUs) in the same space: the power requirements of 336 CPUs are going to be far greater than those of 40. Some datacentres simply will not have the raw capacity to supply those loads, so planning is needed to ensure that the location of the equipment can cope with the projected load.
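
As a rough illustration of the scale involved, the short sketch below estimates the processor power budget of each rack. The 150W-per-CPU figure is an assumed ballpark for this example only, not a measured value; real draw depends heavily on the processors and the workload.

    # Rough comparison of per-rack processor power draw for the two scenarios above.
    WATTS_PER_CPU = 150          # assumed ballpark draw per processor under load

    legacy_cpus = 10 * 4         # 10 stand-alone servers with 4 CPUs each
    blade_cpus = 84 * 4          # 84 blades in the same rack space, as counted above

    legacy_kw = legacy_cpus * WATTS_PER_CPU / 1000
    blade_kw = blade_cpus * WATTS_PER_CPU / 1000

    print(f"Legacy rack: {legacy_cpus} CPUs, roughly {legacy_kw:.1f}kW of processor load")
    print(f"Blade rack:  {blade_cpus} CPUs, roughly {blade_kw:.1f}kW of processor load")
    print(f"About {blade_cpus / legacy_cpus:.1f}x the processor power budget in the same footprint")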

Without further ado, we delve into the blades to see whether they are still cutting-edge technology.



Talkback

  • Pointless Benchmarking

    Using standard benchmarking software for comparison is pointless.

    I could ask a layman to install it and click through to produce the same result.

    I have yet to see a 16-year veteran show his talent in benchmarking.
    anonymous
  • Rack density

I doubt you'd get 42RU of blades into a rack.

    I believe HP recommends 3 c-class chassis per rack, so you may want to lower your core/RAM per rack figures a little...
    anonymous