Datacentre managers have more headaches now than ever, as power costs spiral, IT budgets remain flat and the business demands more from the datacentre. For a growing number of enterprises, blade servers are the solution to this conundrum.
IT managers are finding that their servers are costing more to run. Although hardware purchase prices have remained fairly static, according to IDC, the annual cost of running a server is rocketing — mostly due to the cost of managing it and supplying it with power and cooling.
The energy question
Servers in datacentres are major consumers of electricity: it is estimated that over one per cent of the US's total electricity bill is charged to datacentre owners. That is equivalent to the output of five typical nuclear or coal power plants, and amounted to an electricity spend of $7.2 billion in 2005. It is all the result of huge growth in the number of servers being installed, with volumes doubling between 2000 and 2005.
Datacentres need to reconcile demands for ever more computing and storage capacity with the need to keep electricity costs to a minimum, for both economic and environmental reasons.
As the computing power of processors has increased over those five years, so has their energy consumption. And it takes roughly as much power to cool a server as the server uses to do useful work. In other words, for each watt consumed processing invoices and carrying out other business computing tasks, another watt is required for cooling. So where a rack of servers might have consumed 5 kilowatts five years ago, it might now consume five times that. Between 10 and 30 per cent of the typical enterprise's IT bill is spent on electricity, a figure that could hit 50 per cent within a few years, according to Gartner.
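That rule of thumb can be sketched in a few lines of Python. The function name and the sample rack figures below are illustrative assumptions, not measured data:

```python
def total_power_kw(it_load_kw, cooling_ratio=1.0):
    """Total grid draw: IT load plus cooling overhead.

    cooling_ratio=1.0 models the one-watt-of-cooling-per-watt-of-work
    rule of thumb cited above. (Illustrative sketch, not vendor data.)
    """
    return it_load_kw * (1 + cooling_ratio)

# A 5 kW rack five years ago vs. a 25 kW rack today:
print(total_power_kw(5))   # 10.0 kW drawn from the grid
print(total_power_kw(25))  # 50.0 kW drawn from the grid
```

The point is that every extra kilowatt of compute is really two kilowatts at the meter, which is why the rack that grew from 5 kW to 25 kW of IT load now pulls 50 kW from the grid.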
It's no coincidence that this is becoming an issue now that both energy prices and supply are at their most unstable for a generation. It also coincides with a growing acceptance, by the public and politicians alike, of the need to take the problem of greenhouse-gas-driven global warming more seriously.
In today's world, profligate energy usage is becoming increasingly unacceptable. Enterprises need to be seen to be green — or at least greenish — and, if they can cut their power bills at the same time, then it bolsters their story.
However, in practice it's unlikely that many datacentre managers will be able to cut power consumption significantly — although they might manage to hold it flat. Only in the last two or three years have more energy-efficient, server-oriented processors started to appear, and their deployment has allowed businesses to do more computing while keeping within the same power envelope as before.
Datacentre managers challenged
So datacentre managers face a number of challenges. The first of these is the insatiable demand for computing power as the world goes online. Companies are increasingly relying on their IT infrastructure: their datacentres drive their online businesses, and they're demanding more from that infrastructure, such as running service-oriented architecture (SOA) applications and large enterprise systems such as SAP, Oracle and Salesforce, while maintaining servers for email and other office-based tasks.
All of these applications are CPU- or bandwidth-hungry, which means that they consume power and generate heat. At the same time, IT managers are under pressure to constrain their IT budgets — they need to do more with less. This means that they cannot just add more servers: not only is the power bill becoming significant, but the limits of the ability of power utilities to meet the datacentre's demands have, in many cases, already been reached.
Space is a problem too: many datacentres are simply full and cannot accept any more servers, even if the power is available to drive them. The only feasible solution is to increase the efficiency of the servers already present.
The blade advantage
The solution for many is the blade server. Blade server revenues have jumped 30 per cent in the last year, according to Gartner's latest figures for this market.
Blades modularise the server. They consist of plug-in boards carrying the processors, memory, some form of connectivity and local storage that usually holds just the OS and associated applications.
Modular blade servers such as IBM's BladeCenter system help to maximise server efficiency. However, once you've bought a vendor's chassis you may be locked into its products, as there are no open standards in this area.
The advantages of blades are manifold. Key among them is the fact that you need only one set of cables for all the servers in the chassis. This saves money on the cables themselves, a cost that quickly adds up when long fibre cables are involved, and on their management.
For example, a rack or tower server is likely to need at least six cables plugged into it — keyboard, video and mouse (KVM) connections, Ethernet, Fibre Channel and power. Not only are those cables points of failure, they cost time (and therefore money) to manage, and they can be costly to buy. So instead of six cables hanging out of the back of each device, one set of cables connected to a blade chassis supplies up to 14 servers.
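The cable savings are easy to quantify. A hypothetical comparison follows; the six-cable count and the 14-blade chassis come from the figures above, while the helper functions are made up for illustration:

```python
import math

CABLES_PER_STANDALONE = 6   # keyboard, video, mouse, Ethernet, Fibre Channel, power
SERVERS_PER_CHASSIS = 14    # blades per chassis, per the figure above
CABLES_PER_CHASSIS = 6      # one shared set of cables per chassis

def standalone_cables(n_servers):
    """Every rack or tower server needs its own full set of cables."""
    return n_servers * CABLES_PER_STANDALONE

def blade_cables(n_servers):
    """Blades share the chassis's single set of cables."""
    chassis_needed = math.ceil(n_servers / SERVERS_PER_CHASSIS)
    return chassis_needed * CABLES_PER_CHASSIS

print(standalone_cables(14))  # 84 cables for 14 standalone servers
print(blade_cables(14))       # 6 cables for one fully loaded chassis
```

Fourteen standalone servers mean 84 cables to buy, route and troubleshoot; the equivalent blade chassis needs one set of six.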
Blades also occupy less space: you can almost double the density of servers in a rack with a blade chassis, largely because common elements such as the power supply, connectivity and management are shared across the chassis. And blades use less power for similar reasons. For example, each power supply unit (PSU) in a standalone server is unlikely to be running at or near its full capacity, and PSUs become less efficient the further below their maximum rating they operate.
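A rough model shows why consolidating onto a shared, well-loaded supply wastes fewer watts. The efficiency curve below is invented to show the shape of the effect (roughly 70 per cent efficient near idle, 90 per cent at full load); it is not taken from any vendor's datasheet:

```python
def psu_waste_w(load_w, rating_w):
    """Watts lost inside the PSU at a given load, using an invented
    linear efficiency curve (~70% near idle, ~90% at full load)."""
    efficiency = 0.70 + 0.20 * (load_w / rating_w)
    return load_w / efficiency - load_w

# Four standalone servers, each with its own lightly loaded 750 W PSU:
standalone_waste = 4 * psu_waste_w(150, 750)
# The same 600 W total load on one shared, well-loaded chassis supply:
shared_waste = psu_waste_w(600, 750)
print(round(standalone_waste), round(shared_waste))  # 211 vs. 98 watts lost
```

Same useful load, but the four lightly loaded supplies dissipate roughly twice as much as the single shared one.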
Managing blades is also easier if they live in a chassis that supplies common services, which then allows the datacentre manager to allocate resources to those servers that need them, while shutting down those that aren't required. This is particularly relevant in a consolidated datacentre, where many, if not most, of the servers are virtualised.
A server CPU running at a 20 per cent utilisation rate consumes almost as much power as it would running at full tilt. So business logic dictates that it's far more energy-efficient to use fewer physical servers and run them at high utilisation rates. This is increasingly achieved via virtualisation technology, which allows IT managers to consolidate their servers into virtual machines. Virtual servers can then be moved around the datacentre to ensure that the hardware is fully utilised.
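The consolidation argument can be made concrete with a toy model. The idle and peak wattages below are assumed figures chosen only to reflect the near-flat power curve described above:

```python
def server_power_w(utilisation, idle_w=180, peak_w=220):
    """Toy model of the near-flat power curve: a server at 20%
    utilisation draws almost as much as one running flat out.
    (idle_w and peak_w are assumptions, not measurements.)"""
    return idle_w + (peak_w - idle_w) * utilisation

# Ten physical servers ticking over at 20% utilisation...
before = 10 * server_power_w(0.20)
# ...consolidated as virtual machines onto two fully loaded hosts:
after = 2 * server_power_w(1.00)
print(before, after)  # 1880.0 W before, 440.0 W after
```

Under these assumptions the same workload draws less than a quarter of the power once it runs on two busy hosts instead of ten near-idle ones, which is exactly the economics driving virtualised blade deployments.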
So from a business perspective, blades can save space, management time and energy — and money.
Standards, what standards?
But it's not all a bed of roses. Blade server vendors build chassis into which only their own products will fit — there are no standards in this area, nor does it appear likely that there will be any in the near future. Scratch a blade server vendor and they'll admit that this lack of standards means their business model is similar to that of inkjet printer vendors. In other words, they'll sell you the basic hardware at a relatively low cost — for blade market leaders HP and IBM, that means a chassis costs under £3,000 — because that locks you into their system architecture. Future purchases will need to be compatible with that chassis, and this is where the serious spending starts. The cost of switching your chassis vendor is always more than the cost of continuing with your original choice.
Even so, since blade server sales constitute the fastest-growing part of the market, it's clear that blades are not just here to stay, they're the future of the datacentre. By 2010, one in every four servers bought will be a blade and it'll be an $11 billion market — about four times the size of today's blade server market — according to analysts at IDC, with the numbers of rack and tower servers remaining static.
So, given the quantities in which they're buying them, enterprises appear increasingly convinced by the vendors' arguments that blades are more energy- and space-efficient and more manageable; in short, that they cost less in the long run than traditional servers.