A guide to server efficiency

Summary: The cost of powering and cooling server hardware is fast becoming a critical issue for IT managers, while the demand for ever more computing muscle in the datacentre continues to grow. What can be done? We examine some of the options.

With prices plummeting, there’s never been a better time to buy server hardware. Which is good, because it means you can get a lot more computing for your money. However, demand for processing power continues to rise, and despite the introduction of ever more powerful hardware, datacentre managers continue to cram yet more servers into already overcrowded racks. And that’s bad, because it leads to higher energy and cooling bills. So much so that it won’t be long before it costs more to simply power and cool a server than it does to buy it in the first place.

Fortunately there are measures that can be taken to alleviate the situation, and a number of new technologies to help; we’ll be outlining these in this guide to server efficiency.

Performance per what?
When it comes to consuming power, server processors are one of the biggest culprits, so it’s the processor vendors who are leading the present efficiency drive. Indeed, when illustrating how their products stack up against the competition, the likes of AMD, IBM, Intel and Sun have all but abandoned clock speed in favour of 'performance per watt'. This metric is designed to tell you how much performance you can expect from a processor for each watt of power drawn from the AC socket.

Unfortunately, performance per watt is something of a nebulous concept: there's no standard way of calculating such figures, and no agreed benchmarks to enable objective comparisons to be made.
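To see why quoted figures are so hard to compare, consider how such a number might be put together. The sketch below is purely illustrative: the benchmark scores, wattages and helper function are all made up, and real vendors are free to pick both the benchmark and the power figure (idle, peak or TDP) that flatters them most.

```python
# Hypothetical illustration: performance per watt is just a benchmark
# score divided by average power draw. The catch is that there is no
# standard for either number, so two quoted figures rarely compare.

def perf_per_watt(benchmark_score, avg_watts):
    """Benchmark throughput per watt of measured power draw."""
    return benchmark_score / avg_watts

# Assumed, made-up figures for two hypothetical chips:
chip_a = perf_per_watt(benchmark_score=1200, avg_watts=95)   # ~12.6
chip_b = perf_per_watt(benchmark_score=1000, avg_watts=68)   # ~14.7

# Chip B wins on this metric despite a lower raw score -- but only
# if both vendors measured power and performance the same way.
print(f"Chip A: {chip_a:.1f} score/W, Chip B: {chip_b:.1f} score/W")
```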

Still, it does show a desire on the part of processor vendors to make their chips more efficient, one of the principal techniques being the move to smaller process geometries, i.e. shrinking the transistors etched into the silicon die. The smaller the transistors, the less power is required to switch them, so the more efficient the processor should be and the less excess heat it is likely to generate. At least that's the theory; it hasn't always translated into practice.

When Intel moved from a 130nm to a 90nm process, for example, the power requirements for some of its chips actually rose. Likewise, it had further problems when it moved to the 65nm silicon from which its Xeon server chips are now made.

Intel's 45nm 'Penryn' processors, due in 2007/8, will be followed by even more power-efficient 32nm chips.


Fortunately, Intel has now sorted out most of these power issues, partly by moving to a totally new architecture (known as Intel Core), enabling it to match, if not exceed, the low-energy claims AMD has been making for its Opteron processors for a number of years. Indeed, Intel has become very bullish about energy efficiency, with CEO Paul Otellini recently predicting a 310 per cent increase in performance per watt by the end of the decade. By that time, the company's Xeon server processors (or whatever they're called by then) should be built on an even smaller 32nm process, with interim 45nm chips set to be released in 2007/8.

The multi-core dilemma
On the face of it, then, all you have to do to keep the lid on your energy and cooling bills is upgrade to the newest and, hopefully, most energy-efficient processors. In particular, you should look for low TDP (Thermal Design Power) ratings which, although not quite a performance-per-watt measure, will at least tell you whether a processor has been designed with power efficiency in mind.

However, as is so often the case, it's never that simple: a number of complicating factors wait to upset the equation, such as the introduction of first dual-core and then quad-core processors, with eight-core chips and above likely to follow very soon.

Multi-core ought to be good news when it comes to efficiency. It enables, for example, one 2-way quad-core server (effectively 8 processor cores) to replace four 2-way servers fitted with old-fashioned single-core chips. This means just one power supply and one motherboard generating heat instead of four. There's also just one system disk spinning away, one set of cooling fans and so on — leading to quite clear and measurable gains in terms of reduced power and cooling requirements.
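As a rough back-of-an-envelope check, the arithmetic of consolidation looks something like the sketch below. Every wattage here is an assumed round number for illustration, not a measured figure, and the cooling overhead ratio is likewise an assumption.

```python
# Back-of-an-envelope consolidation sums. All figures are assumed
# round numbers for illustration only.

old_server_watts = 400   # one 2-way single-core server (assumed)
new_server_watts = 550   # one 2-way quad-core server (assumed)
cooling_overhead = 0.5   # assume ~0.5 W of cooling per 1 W of IT load

before = 4 * old_server_watts      # four old servers: 1600 W
after = new_server_watts           # one new server:    550 W
saving = (before - after) * (1 + cooling_overhead)

print(f"Power saved at the plug: {before - after} W")
print(f"Power plus cooling saved: {saving:.0f} W")
```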

Unfortunately, as already pointed out, demand for server processing continues to rise, so you could soon find that you're back up to four servers, each with a pair of quad-core processors. Moreover, because a lot of legacy server applications are single-threaded, they are unable to take advantage of SMP (Symmetric Multi-Processing), let alone extra cores. As a result, you could end up with lots of extra processor cores consuming power and generating heat while doing very little in the way of real work.
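The difference is easy to demonstrate. This sketch runs the same hypothetical CPU-bound job twice: once on a single core, then spread across all available cores using Python's multiprocessing module. A single-threaded legacy binary behaves like the first case no matter how many cores you give it.

```python
import multiprocessing as mp
import time

def busy_work(n):
    """A deliberately CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8   # eight equal chunks of work

    # Single-threaded: one core does everything while the others
    # sit idle drawing power -- how legacy single-threaded code runs.
    t0 = time.time()
    results = [busy_work(n) for n in jobs]
    print(f"1 core:  {time.time() - t0:.2f}s")

    # Multi-process: the same work spread across every available core.
    t0 = time.time()
    with mp.Pool() as pool:
        results = pool.map(busy_work, jobs)
    print(f"{mp.cpu_count()} cores: {time.time() - t0:.2f}s")
```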

Even if you do have the latest in multi-threaded software, you’re unlikely to be using all of the processing power all the time. Which is why it’s worth looking for additional power management features built into the chips involved.
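Features such as Intel's SpeedStep and AMD's PowerNow! throttle a core's frequency and voltage when it is idle. On a Linux server these typically surface through the kernel's cpufreq subsystem, so you can check whether they are active by reading the standard sysfs files; a minimal sketch, assuming a kernel with cpufreq support (the files simply won't exist otherwise):

```python
# Minimal sketch: inspect per-core frequency scaling on Linux via
# the cpufreq sysfs interface. Assumes cpufreq support is present.
from pathlib import Path

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    gov = cpu / "cpufreq" / "scaling_governor"
    freq = cpu / "cpufreq" / "scaling_cur_freq"
    if gov.exists() and freq.exists():
        print(f"{cpu.name}: governor={gov.read_text().strip()}, "
              f"freq={int(freq.read_text()) // 1000} MHz")
```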

 
