
What's the best blade server?

Blade servers were once the saviours of the datacentre. Expandability was king. But do blade servers still make sense today? We find out if they're still worth it.
Written by Andrew Bubic, Contributor and Matt Tett, Contributor

It has been exactly six years since we last tested blade servers, so in technology years (like dog years) this equates to several generations. Back in 2003, blade servers were emerging as cutting-edge technology (pardon the pun); these days they have matured, and while some may find them routine, we at the lab still find the nuanced differences between blade servers and traditional servers rather intriguing.

Since the advent of virtualisation and storage area networking (SAN), blade servers have enjoyed a renaissance in recent years.

Blade servers can be used for many tasks. In most enterprises their primary jobs are to reduce the datacentre footprint, increase fail-over redundancy and cut the overhead of managing disparate server platforms. A single seven Rack Unit (7RU) blade enclosure can house up to 14 blade servers. If each is equipped with two quad-core processors and 64GB RAM, that accumulates the power of 112 cores and 896GB RAM in just 7RU of space. In a single rack (42RU), six of these beasts would bring a total of up to 672 cores and 5.376TB of RAM. Any cluster computing geek would dream of these footprint efficiencies.
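
For readers who want to check the arithmetic, the figures above fall out of a simple back-of-the-envelope calculation. The sketch below (Python, using the blade counts and per-blade specs quoted above as its inputs) reproduces them.

```python
# Back-of-the-envelope density maths for the figures quoted above:
# 14 half-height blades per 7RU enclosure, two quad-core CPUs and
# 64GB RAM per blade, six enclosures per 42RU rack.
BLADES_PER_ENCLOSURE = 14
CORES_PER_BLADE = 2 * 4               # two quad-core processors
RAM_PER_BLADE_GB = 64
ENCLOSURES_PER_RACK = 42 // 7         # six 7RU enclosures per rack

enclosure_cores = BLADES_PER_ENCLOSURE * CORES_PER_BLADE        # 112 cores
enclosure_ram_gb = BLADES_PER_ENCLOSURE * RAM_PER_BLADE_GB      # 896GB

rack_cores = ENCLOSURES_PER_RACK * enclosure_cores              # 672 cores
rack_ram_tb = ENCLOSURES_PER_RACK * enclosure_ram_gb / 1000     # 5.376TB

print(f"Per enclosure: {enclosure_cores} cores, {enclosure_ram_gb}GB RAM")
print(f"Per 42RU rack: {rack_cores} cores, {rack_ram_tb}TB RAM")
```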

While computing density is the bright side of the rainbow, there are, of course, downsides to the technology. A fundamental concern is described by the adage "keeping all of your eggs in one basket". For example, an enterprise consolidating its existing server base from 30 or 50 disparate legacy systems into one brand new shiny virtualised blade enclosure must give thought to this critical single point of failure.

Certainly you can ensure three blades (or more) in the enclosure mirror each other, with three redundant power supplies and a big healthy UPS to guarantee your power. You can have a great fibre SAN attached to manage all your storage needs, and you can even have multiple 10Gbps redundant network links between the switch on the chassis and your network, but what about the chassis itself? What if the enclosure fails? Even with a four-hour replacement warranty, that is a long time to wait for a fix on something handling the applications of 30 or 50 previous systems. The solution is to build in failover redundancy should a chassis fail. In some cases this can mean replicating the whole arrangement and potentially doubling your costs. Most sizable organisations do, in fact, do this by having a primary and a secondary datacentre geographically removed from each other.

Another basic concern is the capability of your datacentre or computer room to physically handle these machines. Blade servers are notorious for creating new and dense hot spots. It is common sense that the more processing you pack into a smaller space, the greater the heat output, and therefore the greater the need for airflow containment and cooling systems. It is amazing how many architects simply don't plan for this.

Another common oversight is power utilisation. Theoretically, blade servers use less power to perform tasks than their stand-alone siblings (particularly when used in a virtualised environment), but as they are aggregated they will collectively use more. In the past, one rack might have been filled with 10 servers (40 CPUs). Compare this with 84 blade servers (168 CPU sockets, 672 cores) in the same space, and the power requirements are going to be far greater. Some datacentres will simply not have the raw capacity to supply those loads. Of course, planning is needed to ensure that the location of the equipment is able to cope with the projected load.
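
As a purely illustrative sketch of why this planning matters (the per-socket wattage and chassis overhead below are our assumptions, not measured figures), the aggregate load scales roughly with the socket count:

```python
# Illustrative only: assumes ~95W per populated CPU socket plus a fixed
# per-chassis overhead for fans, PSU losses and switching. The wattages
# are assumptions for the sake of comparison, not measured values.
WATTS_PER_SOCKET = 95
CHASSIS_OVERHEAD_W = 400

def rack_load_watts(sockets, chassis):
    return sockets * WATTS_PER_SOCKET + chassis * CHASSIS_OVERHEAD_W

legacy = rack_load_watts(sockets=40, chassis=10)    # 10 traditional servers
blades = rack_load_watts(sockets=168, chassis=6)    # 84 blades in 6 enclosures

print(f"Legacy rack: ~{legacy / 1000:.1f}kW, blade rack: ~{blades / 1000:.1f}kW")
```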

Without further ado we delve into the blades to see if they are still cutting edge technology.

HP BladeSystem

[Image: HP BladeSystem c3000 enclosure (Credit: HP)]


This is one of the biggest stand-alone (non-rack-mountable) servers we have seen. Big, black, bold and with a price tag to match. The unit carries the moniker HP BladeSystem c3000 Enclosure Tower. In reality, HP has come up with an ingenious way of taking a 6RU server, mounting it vertically, attaching a housing and some skateboard wheels and turning it into a pedestal server.

The device itself can house eight blade servers and includes a modular I/O slot. The test unit we received included a neat pop-out 3-inch colour LCD management display, a DB9M serial console port and USB port, an in-built DVD RW drive, as well as a recessed reset button and three status LEDs.

Both the sides of the unit and its base are solid. The enclosure's dimensions are a whopping 270mm wide by 780mm deep by 540mm high (including the wheels).

At the rear of the machine are four modular bays, two of which (on our unit) were populated with fibre modules. There are also two smaller modular bays, one of which was populated with an RJ45-connected Integrated Lights-Out (iLO) and UID module.

The majority of the real estate of this device is consumed with blower modules. These are not run-of-the-mill fans, but are very definitely blowers. When this server is powered up, the blowers immediately go to their maximum cycle. We are surprised that HP's design engineers didn't fit the wheels with ceramic disc brakes, because these blowers are not far off being jet turbines — and there are six of them installed. There are also six hot-swappable redundant power supply units, each with its own fan. The downside to all this air management is the noise — even when idling this thing emits 72dBA on our sound meter. During start-up it recorded levels closer to 97dBA.

The test unit was supplied with four blades — two BL460 G6s and two BL490c G6s. The 460s each included two 2.5-inch 72GB 10k SAS hard drives, which can be removed from the front of the blade without needing tools, while the 490s are diskless.

Each blade server measures 180(H)x50(W)x485(D)mm. Remote access and management is possible; alternatively, operators can use a dongle to connect to each blade for direct access. The HP dongle has a standard 15-pin monitor connector, two USB connectors and one DB9M serial connector. We found that the dongle didn't latch onto the blade very well, although we would expect most administrators to leave the noise in the data room and access the servers remotely. We also found that with the dongle plugged into the blade occupying slot 5, it became impossible to swing the 3-inch management screen flat against the front of the enclosure. The front of each blade also has four LED indicators (UID, disk, flex 1 and 2, and power), plus a power button with an integrated LED.

The BL490c G6 is perfect for virtualisation projects. Two great benefits are the amount of memory that can be installed (18 DIMM slots in a half-height server blade) and the embedded Flex-10-capable controllers, which offer up to eight integrated FlexNICs.

HP's Virtual Connect technology ends the switch versus pass-through debate by virtualising the server edge — where your server blades connect to your Local Area Network (LAN) and your SAN. Furthermore, HP states that its Virtual Connect Flex-10 provides up to four times the number of supported NICs per server without increasing the number of switches required to connect them. Each physical NIC can be configured from 100Mbps up to 10Gbps depending on application needs. Administrators can add or replace a server blade in a matter of minutes. It also relieves LAN and SAN administrators of the complex choreography needed to follow a moving server blade and to manage countless connections at the server edge.
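
The carve-up is easier to picture with a small model. The sketch below is purely illustrative (it is not HP's Virtual Connect API; the class, names and limits are our assumptions): a 10Gbps physical port is divided into a handful of FlexNICs whose combined allocation cannot exceed the physical link.

```python
# Purely illustrative model of carving a 10Gbps port into FlexNICs.
# This is not HP's Virtual Connect API; it only sketches the constraint
# that the per-NIC allocations must fit within the physical link.
class FlexPort:
    CAPACITY_GBPS = 10.0
    MAX_NICS = 4

    def __init__(self):
        self.nics = {}   # name -> allocated bandwidth in Gbps

    def add_nic(self, name, gbps):
        if len(self.nics) >= self.MAX_NICS:
            raise ValueError("all FlexNIC slots on this port are in use")
        if gbps < 0.1 or sum(self.nics.values()) + gbps > self.CAPACITY_GBPS:
            raise ValueError("allocation must be 0.1-10Gbps and fit the link")
        self.nics[name] = gbps

port = FlexPort()
port.add_nic("management", 0.5)
port.add_nic("vmotion", 2.0)
port.add_nic("production", 6.0)
print(port.nics)
```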

This server is very well engineered. Of particular note is the pop-out 3-inch LCD management screen, controlled using just five buttons. The fact that it is a pedestal system is a little strange; we could not imagine many users opting for this configuration unless it was for a development, testing or pre-staging environment. Primarily because of the amount of noise the unit generates, it should be locked away in a computer room or datacentre, where racks usually dominate anyway.

The cost of this unit (test configuration) was almost $100k; however, a large chunk of this includes the advanced networking modules. If you are into powerful virtualised environments, over time the savings in labour costs may outweigh the extra costs of the technology. Three years on-site parts and labour warranty is excellent and rarely seen these days.

The bottom line: Very well engineered, very flexible, unique and good management features.
Vendor: HP
Product: HP BladeSystem c3000 Enclosure (AU$12,000 – AU$15,000)
HP BLc Virtual Connect 4GB FC Opt Kit (AU$18,975)
HP BLc VC Flex-10 Enet Module Opt (AU$24,500)
HP ProLiant BL460c Generation 6 Server Blade x2 (AU$4950 – AU$8000 excluding accessories)
HP ProLiant BL490c Generation 6 Server Blade x2 (AU$7700 – AU$12,000 excluding accessories)
Price of unit tested: RRP AU$96,320
Warranty and support: Three years parts; three years labour; three years on-site
Support method: telephone
The good: Very modular
Innovative design
Good to see a pedestal blade server
The bad: Very large
Noisy
Blade module 5 dongle interferes with 3-inch management display

IBM BladeCenter

[Image: IBM BladeCenter S enclosure (Credit: IBM)]


IBM has named its blade enclosure BladeCenter. Those of us using British English will just have to tough it out.

The BladeCenter's chassis is a standard 7RU and the enclosure can house up to six blade servers. It includes a modular slot for a management terminal as well as two other modular slots for other units, such as disk modules. The management module included a CD-RW/DVD Multi optical drive along with two USB ports and five status LEDs. The chassis measures 305(H)x740(D)x445(W)mm, not including the rack "ears".

The rear of the unit features four hot-swappable power supply units (each with a small ducted fan) and four larger fan modules (each made up of two 3.5-inch fans). The noise from these fans is relatively low whilst idling (68dBA), but with one of the four modules removed the noise ratchets up to 89dBA.

On the rear of the unit are six modular bays used for I/O and management. The modules shipped to us for testing included a SAS connectivity unit, a six-port copper gigabit Ethernet switch and a management module with two USB ports, one 15-pin monitor port and an RJ45 network port.

The unit came installed with two HS22 blade servers. The HS22 has two internal 2.5-inch SAS 146GB hard disk drives. Each blade server measures 30(W)x245(H)x455(D)mm.

IBM has integrated its server, storage and networking management tools into the one chassis. The BladeCenter can also accommodate a fully redundant and fully integrated SAS-based SAN, providing an entire virtualised solution in one box that includes servers, networking and storage without having to cable up anything other than external power and network connections.

IBM has gone aggressively down the power-saving path with its blades. For example, the HS22 has built-in sensors, such as an altimeter, that allow it to optimise cooling based on its elevation. IBM claims that these add up to a 93 per cent improvement in energy savings over its previous generation of rack servers.
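
The principle behind the altimeter is simple enough. The sketch below is our own illustration, not IBM's firmware: air thins with altitude, so moving the same amount of heat needs more airflow.

```python
# Our illustration, not IBM's firmware: thinner air at altitude carries
# less heat per unit volume, so fan speed is scaled up to compensate.
def fan_speed_pct(base_pct, altitude_m):
    # Rough approximation: air density falls about 1% per 100m near sea level.
    density_factor = max(0.7, 1.0 - altitude_m / 10_000)
    return min(100.0, base_pct / density_factor)

for altitude in (0, 1000, 2000):
    print(f"{altitude}m: fan at {fan_speed_pct(40.0, altitude):.1f}%")
```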

Time and again, IBM is a favourite with enterprise operators and administrators. Its high quality and consistent labelling always stand out. Rarely is anything out of place, and although its products are never flashy, that consistency is part of what makes IBM attractive.

When compared to HP, however, IBM's direct blade management and monitoring systems fall behind.

The price of its servers and components is very good. A fully-populated server would be comparable in cost to its non-blade cousins. Its three-year warranty is also very good.

The bottom line: Overall, it's a robust, solid performer that delivers consistency and quality.
Vendor: IBM Australia Ltd
Product: IBM BladeCenter S Chassis (AU$4189)
2x BladeCenter HS22 Blades (AU$3259 – AU$6399)
Price of unit tested: RRP AU$31,479
Warranty and support: BladeCenter S comes with a three-year customer replaceable unit and on-site limited warranty
BladeCenter HS22 comes with a three-year customer replaceable unit and on-site and off-site limited warranty
The good: Consistency across the enterprise
Good honest levels of quality
Excellent labelling
The bad: Direct/physical device management could be further developed
Supports just six blade servers in a 7RU space

Benchmarks and tests

Sandra Pro 2009
Sandra Pro is a low-level benchmarking tool that focuses on sub-system performance rather than whole of system performance. This allows the TestLab to identify strengths and weaknesses in a particular sub-system. The tests utilised are:

  • CPU Arithmetic Benchmark (MP/MT support)
  • CPU Multimedia Benchmark (including MMX, MMX Enh, 3DNow!, 3DNow! Enh, SSE(2)) (MP/MT support)
  • Memory Bandwidth Benchmark (MP/MT support)

Sungard Adaptiv Credit Risk
The Sungard Adaptiv Credit Risk application is a component of Sungard's comprehensive suite of risk management products. This workload is a scaled-down version of the full application. At its core, the application uses a proprietary Monte Carlo financial engine to determine the future value of a fictitious portfolio.

This package consists of a Microsoft Windows-based .NET application and two data files (sample market data and a sample portfolio) which provide input to the financial engine.

The benchmark is self-timed in seconds; the shorter the run time, the better the performance.
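
To give a feel for the style of workload, here is a toy Monte Carlo valuation. It is purely illustrative (our own sketch, not the Sungard Adaptiv engine), but like the benchmark it is CPU-bound and self-timed.

```python
# Toy Monte Carlo portfolio valuation, illustrative of the kind of workload
# the benchmark exercises; it is not the Sungard Adaptiv engine.
import math
import random
import time

portfolio = {"asset_a": 1_000_000.0, "asset_b": 500_000.0}  # current values
drift, vol, horizon_years = 0.05, 0.20, 1.0
paths = 100_000

start = time.time()
total = 0.0
for _ in range(paths):
    path_value = 0.0
    for value in portfolio.values():
        # one geometric Brownian motion step over the whole horizon
        shock = random.gauss((drift - 0.5 * vol**2) * horizon_years,
                             vol * math.sqrt(horizon_years))
        path_value += value * math.exp(shock)
    total += path_value

print(f"Expected future value: {total / paths:,.0f}")
print(f"Run time: {time.time() - start:.2f}s (shorter is better)")
```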

Cinebench 9.5
Cinebench is a free benchmarking tool for Windows and Mac OS based on the 3D software Cinema 4D. The tool is designed to deliver accurate benchmarks by testing not only a computer's raw processing speed but also the other areas that affect system performance, such as OpenGL, multithreading, multiprocessors and Intel's HT Technology.

Cinebench includes render tasks that test the performance of up to 16 multiprocessors on the same computer as well as software-only shading tests and OpenGL shading tests on huge numbers of animated polygons that will push any computer to its limits.

Power usage
The TestLab uses a hardware power meter to take in-line readings of power usage, measured in Watts, between the wall socket and the device being tested, recording the peak and idle values.
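
A minimal sketch of how such readings reduce to the two reported figures (the sample values below are invented for illustration):

```python
# Reduce a series of in-line power-meter samples (Watts) to the two
# figures reported in the charts: idle and peak. Sample data is invented.
samples_watts = [612, 608, 615, 1480, 1725, 1690, 1702, 640, 618]

idle_watts = min(samples_watts)   # reading with the system quiescent
peak_watts = max(samples_watts)   # highest reading under benchmark load

print(f"Idle: {idle_watts}W, Peak: {peak_watts}W")
```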

Blade server configurations

                     | HP BL460c                               | IBM HS22
CPU type             | Intel Xeon E5530 2.40GHz quad-core      | Intel Xeon E5530 2.40GHz quad-core
Number of CPUs       | 2                                       | 2
Memory type 1        | Micron MT36JBZF51272PY-1G4D1BA 4GB DDR3 | Micron MT36JBZS51272PY-1G4D1BA 4GB DDR3
Memory type 2        | Micron MT18JSF25672PDZ-1G4D1AB 2GB DDR3 | Not applicable
Amount of memory     | 4x 4GB + 3x 2GB = 22GB total            | 6x 4GB = 24GB total
Hard drive brand     | HP DG072BB975                           | Seagate Savvio ST9146803SS
Hard drive capacity  | 72GB                                    | 146GB
Hard drive speed     | 10,000rpm                               | 10,000rpm

Test results

It is a tightly contested race between these two industry heavyweights, each pulling ahead of the other at different times.

HP's BL460c misses out to IBM's HS22 in the Arithmetic tests, but performs better in the Multimedia testing. As is to be expected with such a behemoth of an enclosure (and with four servers installed against IBM's two), the HP system uses more energy; however, at idle it draws only about a quarter more than IBM's machine. Given that it houses twice as many servers, we would have expected a considerably bigger gap, so that was a pleasant surprise.

In the Sungard tests HP performs best, but in the Cinebench testing IBM edges ahead. The biggest surprise, and something we can't quite put our finger on, is the memory bandwidth tests. While IBM performed comfortably with scores in the low 30s (GB/sec), HP flatlined at 10GB/sec. The only physical disparity between the memory configurations of the two servers is that the IBM blade is configured with 24GB of RAM, while the HP has 22GB made up of mismatched 2GB and 4GB modules; mixing DIMM capacities in this way can prevent optimal channel interleaving, which may account for some of the gap.

To summarise such a tightly contested race is difficult, given how the similar performance favoured each vendor in different ways. If it comes down to a photo finish, Enex would say IBM comes in a nose in front.

[Charts: Sungard Adaptiv, Sandra memory bandwidth, Sandra multimedia, Sandra arithmetic, Cinebench and power usage results (Credit: Enex)]


There are two additional products from Sun and Dell that Enex feels should also be presented, but unfortunately they didn't arrive in time for this review.

Sun is at home in the datacentre. Now that it is owned by Oracle we hope that its ambitions in the enterprise market will not abate. Blades make a natural case for an organisation like Sun because its clients traditionally seek space saving and higher performance capacity per rack unit.

In our 2003 blade server test, Dell offered the best solution for an office needing a powerful system with plenty of storage. It is interesting to see how far it has come since then, albeit on paper.

As with most reviews, vendors always have to scramble to get their latest and greatest products to the lab in time for testing. But in many cases vendors may not even have such brand new technology in the country yet (particularly for larger enterprise products).

On other occasions the few evaluation units they might have available to loan (think of the enormous cost of lending kit such as this) are already being used in a corporate or government evaluation exercise. So we, and the many other organisations that are eager to see such exciting new products, will often need to take a number and be patient.

Editorial deadlines are always a time challenge. Moving heavy, delicate and expensive technology quickly around the country is never simple. Vendors will always try to make their most up-to-date product available, but getting brand new (or even prototype) products ready in time is difficult. We're often informed of a new and improved model that will be available "really soon" if we could only wait a few more weeks.

Many of the products that Enex reviews for you are pre-release versions and may be mere ghosts of what will finally be released and for sale.

Dell M1000e

[Image: Dell M1000e enclosure (Credit: Dell)]


Dell's latest enclosure is the M1000e, and its Xeon 5500-equipped blade server is the M610. The M1000e consumes 10RU of space, which means that only four can be installed in a 42RU rack.

What it takes up in space it compensates for in capacity, supporting 16 blades. The design shows that Dell, like HP, has taken the next step in blade generations.

The front of the unit is mostly taken up with the blades, along with a small, innovative interactive colour LCD management module that flips around on the base of the device. There are also two USB ports, a DB15 monitor connector and a power button/status LED.

The rear includes six redundant power supplies, nine large fans and several modular I/O bays. Also on the rear are an integrated KVM switch and the chassis controllers.

The M610 blade server is a half-height design that packs the new Intel Xeon 5500s and doubles the memory capacity of its predecessor. An interesting feature of the M610 is its ability to take an SD card or internal USB-connected memory device carrying an embedded hypervisor, which lets an administrator boot straight into a virtualised environment. The front of the blade provides access to two 2.5-inch SAS hard disk drive bays and has a power status LED, two USB ports and a power button.

Dell makes a bold claim that the M1000e is the most power efficient blade solution on the market. It states that the M1000e takes advantage of thermal design efficiencies such as ultra-efficient power supplies and dynamic power efficient fans with optimised airflow design, which enables better performance in a lower power envelope.

Dell's FlexIO modular I/O switch technology enables scalability, providing additional uplink and stacking functionality. Its centralised management controllers provide redundant, secure access paths for administrators to manage multiple enclosures and blades from a single console, built around the integrated KVM solution that has long been a mainstay of the Dell blade server family. Dynamic power management provides the capability to set high/low power thresholds to help ensure blades operate within assigned power envelopes.

A medium to large enterprise looking to move a number of servers to a virtualised environment would find the M1000e enclosure and the M610 blades ideal. The pricing, as is often the case with Dell, is very attractive, as is the three-year on-site warranty.

The bottom line: The Dell M1000e enclosure and the M610 blade server appear to be a well-designed, next-generation blade solution.
Vendor: Dell
Product: Dell M1000e enclosure
Dell M610 blade server
Price of unit tested: RRP AU$16,258
Warranty and support: Three years
Support methods: on-site, phone, online
The good: 16 servers in 10RU of space
Innovative management features
The bad: Very large; only four chassis per rack

Sun Blade

[Image: Sun Blade 6000 enclosure (Credit: Sun)]


Sun is at home in the enterprise datacentre, so it makes sense that it is in the blade server business too. Sun nominated its Blade 6000 enclosure and Sun Blade X6270 servers for this review.

The 6000 chassis is a 10RU product that can house 10 servers, so there is not a great deal of space saving; Sun's pitch, however, is that it is more about flexibility and ease of management than density. Indeed, Sun offers blades housing its own UltraSPARC processors, AMD Opterons and Intel Xeons.

The front of the unit is taken up with 10 large server module bays and two fan "drawers".

The X6270 server blade uses Intel's Xeon 5500 series processors. Each blade supports up to two of these, along with four 2.5-inch hard drives accessed from the front of the unit. The rear of the blade provides access to a CompactFlash slot that can hold boot images for virtualisation environments (similar to the Dell).

Sun claims that its unique multi-architecture chassis offers the most diverse choice of platforms within a single 10RU chassis. This enables AMD, Intel or Sparc CPUs within the same chassis to deliver Windows, Solaris, Linux, VMware solutions and so on.

Sun delivers blade I/O via dedicated PCIe ExpressModule cards, which keeps even complex I/O environments simple to provision. Addressing can be decoupled from the blade itself, as the MAC and WWN addresses on the I/O cards are effectively assigned to slots. This means that an administrator can change or swap a server without needing to make any changes within the network or storage fabric.
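
A simplified model of the idea (the structure and values below are ours, not Sun's management software): the MAC/WWN identities live with the chassis slot, so swapping the blade in a slot changes nothing the LAN or SAN fabric can see.

```python
# Simplified model of slot-assigned addressing: the MAC/WWN identities
# belong to the chassis slot, not the blade, so replacing the hardware in
# a slot needs no LAN or SAN rezoning. All names and values are invented.
slot_identities = {
    1: {"mac": "00:14:4f:aa:00:01", "wwn": "21:00:00:14:4f:aa:00:01"},
    2: {"mac": "00:14:4f:aa:00:02", "wwn": "21:00:00:14:4f:aa:00:02"},
}

slot_hardware = {1: "blade serial 0734XK", 2: "blade serial 0735PQ"}

def replace_blade(slot, new_serial):
    """Swap the physical blade; the slot keeps its network identity."""
    slot_hardware[slot] = new_serial
    return slot_identities[slot]   # unchanged, so the fabric sees no change

print(replace_blade(1, "blade serial 0801ZZ"))
```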

Overall, Sun presents a very impressive blade solution, particularly for those enterprises looking for flexibility over density. Costs are excellent and it definitely competes with its peers. Sun's warranty and support is interesting: rather than charging support on every individual element of a blade deployment, Sun has a very simple model of paying just once, on the chassis alone. Whether you choose to deploy one blade or 10, and irrespective of the types of blade or I/O options being deployed, the cost for support is the same.

The bottom line: Sun presents a very flexible and cost-effective solution.
Vendor: Sun Microsystems
Product: Sun Blade 6000 enclosure (AU$9000)
2x Sun Blade X6270 (AU$3767)
Price of unit tested: RRP AU$16,616
Warranty and support: One to three years, 24/7 with two- or four-hour response
Support methods: email, web and telephone
The good: Very well priced
Good support package benefiting heavy users
Flexible processor choices and servers all supported within one blade enclosure
The bad: 10 servers in 10 rack units does not really create denser computing

Conclusion

The biggest leap forward specifically for blade servers and their enclosures/chassis over the past six years has been the management of the products as well as increased I/O capabilities and options.

But alongside blade server evolution, two other key technologies have been adopted and grown in the enterprise, and have undoubtedly aided blades: storage area networking (SAN) and virtualisation. Six years ago, when we last looked at blade servers, SANs were in their infancy, as was virtualisation.

Essentially, SANs have removed the need to keep applications, data and even a tailored operating system on disks internal to the servers. In a blade, that internal disk space is potentially valuable real estate, and freeing it is directly related to the blade's ability to reduce physical server space in an enterprise's datacentre or computer room.

The second is virtualisation. This has enabled the utility computing, resources-on-demand model that blades are ideally suited to: being able to plug in extra blade servers as demand for memory and CPU increases gives administrators far greater flexibility than provisioning a whole new stand-alone server and the attendant resources (I/O, management, power, space and so on). These two technologies have really been the drivers behind the renaissance in blade computing. Our opinion is that without them, blades may have remained a fairly niche product.

While blade technologies can offer flexible options to the enterprise, it pays to clearly ascertain your requirements and match vendors to them, be it platform flexibility, higher-density computing, management, performance, redundancy or ease of administration.

To make a realistic choice between all of the vendors on offer here we need to look at the application and environment that the server will be used for. Factors such as the ability to consolidate a number of very diverse systems, saving space, getting the biggest bang for your buck, raw system and application performance and more should be considered.

However, in the end there can only be one winner. Due to its all-round capabilities, its suitability for handling the demands of a medium enterprise (or a development/staging platform for a large enterprise), and its engineering and innovative design, one single vendor takes out top honours in 2009:

Congratulations, Hewlett-Packard
