Moore's Law can't stand the heat

Summary: In this special report, we look at why many datacentres today are facing a power and cooling crisis.

Over the past few years, the amount of electricity required to power a server in a datacentre has more than doubled. So although buying the server costs almost 20 percent less than it did two years ago, that server costs significantly more to run.

This increase in operating costs is because the latest servers are stuffed with more computing horsepower than ever before. Even though the chip manufacturers are creating more efficient processors, technologies such as virtualisation are causing these chips to work harder and for longer periods of time.

The hidden cost here is that, with more transistors in each processor and more processors (and processing cores) in each server, the datacentre rack generates far more heat than it used to, which means traditional datacentre cooling systems can no longer keep up.

The problem is so bad that analyst firm Gartner expects more money to be spent this year on power and cooling technologies than on the servers themselves. In two years' time, Gartner expects around half of all datacentres to have insufficient power or cooling capacity to meet demand.

The cost of powering and cooling a server is currently 1.5 times the cost of purchasing the server itself, according to The Uptime Institute (TUI), a think tank dedicated to improving datacentre reliability.

In a white paper titled Data Centre Energy Efficiency and Productivity, Kenneth Brill, founder and executive director of TUI, said that if current trends continue until 2012, best-case estimates show that powering and cooling a server will cost three times as much as purchasing the hardware. The worst-case scenario is that power and cooling will cost 22 times more than the hardware.

This prediction has led the TUI to go as far as saying the benefits gained from Moore's Law -- which states that the number of transistors crammed into silicon will double every 24 months -- will be nullified by the increasing cost of powering and cooling those processors.

In a recent article for CIO.com, Brill argued that Moore's Law can no longer be seen as a "good predictor of IT productivity because rising facility costs have fundamentally changed the economics of running a data centre".

Brill gives an example where one of TUI's members spent US$22 million buying new blade servers but then had to fork out US$54 million to boost power and cooling capacity. This took the total investment from US$22 million to US$76 million.
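
The arithmetic is simple but worth making explicit. Here is a back-of-the-envelope sketch in Python using the figures quoted above; the per-server purchase price in the second half is a purely hypothetical illustration, not a TUI number:

```python
# Back-of-the-envelope arithmetic using the figures quoted in the article.
# The ratios and the US$22m/US$54m example come from TUI; the per-server
# purchase price below is a hypothetical illustration.

hardware_cost = 22_000_000          # blade server purchase (US$)
power_cooling_upgrade = 54_000_000  # extra power and cooling capacity (US$)

total_investment = hardware_cost + power_cooling_upgrade
print(f"Total investment: US${total_investment:,}")  # US$76,000,000

# Power-and-cooling cost as a multiple of the server purchase price.
ratios = {"today": 1.5, "2012 best case": 3.0, "2012 worst case": 22.0}
server_price = 5_000  # hypothetical purchase price of one server (US$)

for scenario, multiple in ratios.items():
    print(f"{scenario}: US${server_price * multiple:,.0f} in power and "
          f"cooling per US${server_price:,} server")
```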

He warns CIOs that their "facilities and infrastructure" costs, which currently account for between one and three percent of the IT budget, are likely to shoot up to between five and 15 percent within a few years.

"That is enough for the CEO and CFO to begin scrutinising how the IT budget is being spent," he said.

Bill Clifford, chief executive of datacentre management software firm Aperture, believes there is an impending crisis: "Many users are simply unaware of the dangers they are introducing to their datacentres".

Clifford said that in the past, cooling issues could be resolved in a matter of weeks as suppliers of cooling equipment could cope with demand. But with the increased demand, the lead time for new equipment can be up to 18 months, putting firms at risk of experiencing datacentre downtime.

Virtualisation is the silent enemy
Virtualisation has been touted as a technology that can reduce power consumption because it allows computing tasks to be consolidated onto fewer servers. Unfortunately, cutting the number of physical servers by piling more work onto those that remain pushes each surviving machine much closer to its maximum power draw, so consumption per rack can actually rise.

According to Kris Kumar, MD of Sydney-based datacentre design specialists 3iGroup, virtualisation is the "silent enemy" because once you virtualise a server, it is likely to run at about 80 percent of its capacity. Before virtualisation, that server was running at about 15-20 percent of capacity -- and therefore using less power.

"In real power terms, a 300 watt server which was running at 20 watts is actually now running at 280 watts. You are reducing the footprint but putting in more processing power, so the power per footprint has gone up," Kumar told ZDNet Australia.

Tim Ferguson from silicon.com contributed to this report.


About Munir Kotadia

Munir first became involved with online publishing in 1998 when he joined ZDNet UK and later moved into print publishing as Chief Reporter for IT Week, part of ZDNet UK, a weekly trade newspaper targeted at Enterprise IT managers. He later moved back into online publishing as Senior News Reporter for ZDNet UK.

Munir was recognised as Australia's Best Technology Columnist at the 5th Annual Sun Microsystems IT Journalism Awards 2007. In the previous year he was named Best News Journalist at the Consensus IT Writers Awards.

He no longer uses his Commodore 64.


Talkback

  • Heat Issues

    Virtualization itself is not the issue. Datacentres charge a massive premium for their space, and so customers are 100% right to want to maximise their use of that space. Not to do so would be financially irresponsible.

    The issue is that datacentre billing models don't account for electricity use, and so the datacentres have been the ones absorbing the cost increases. The solution would be for datacentres to charge customers for the power used, just as they charge for bandwidth.

    The moment this happens, you will see customers being more careful with their system designs. If the change becomes universal, you will also see hardware manufacturers taking power usage and thermal design into account when designing new products (although this has already started, anyway).
    anonymous
  • Re: Heat Issues

    Well said.

    Whilst charging per kWh would hurt the wallet, it's good for the planet.
    anonymous
  • Re: Power consumption

    Consider a "300 watt" 2 socket 4 core rack mountable server. You can easily consolidate 16 distinct systems onto it by using VMs with VMware ESX Server. Let's then say that it uses 80% of available capacity, and thus "280 watts" given the argument by Kris Kumar in the article. Well, if you consider his 16 distinct boxes using 20 watts each, that's 320 watts usage. So even if the gain isn't significant you still save. And you have eaten up a rack and half of space in your data center.

    I'd have to see more data from this Mr Kumar before saying anything he's mentioned here is accurate. Silent killer? Silent savior is more like it.
    anonymous
  • Silent Killer? Silent Assassin

    When you consolidate old servers that don't have demand-based power switching, you are reducing the overall power. A three-year-old server rated at 300 watts draws 300 watts regardless of its capacity utilisation. New servers, onto which workloads are virtualised, have demand-based switching. If you consolidate 10 of these old servers onto one, you would be moving from 3,000 watts to 300 watts (see the sketch after these comments for the arithmetic). I think Mr Kumar has a secret agenda... perhaps more of an assassin role.
    anonymous
  • virtualisation and power

    The comments here seem to be missing the point! Virtualisation is a great advance in technology, but the reality is that when the footprint of servers is reduced by virtualising them, server sales don't go down proportionately. Companies will continue to adopt servers, even if they are virtualised, thereby increasing the power per square metre of footprint, so a 2-3kW rack will eventually draw 7-10kW as a result of "doing more with less" and installing more virtualised servers into less physical space. Logic tells you that power and heat in the data centre will thus increase eventually, and it is happening already!

    Why do you think there is a power and cooling crisis in data centres around the world? Virtualisation and consolidation, the adoption of blade technology, and bigger storage requirements are the main reasons. So don't be fooled into thinking that virtualisation will solve power and cooling issues: as long as processing speeds get faster and applications drive the demand for processing capability, virtualised or not, power and cooling demands will grow and the data centre will come under pressure.
    anonymous
  • Virtualisation and Power

    Your comments hit the nail on the head.

    It's not today's power demands that are the issue. It will be future power requirements, driven by the need for increased storage capacity, greater processing capability and infrastructure cooling, that will see power demand escalate.

    From an IT perspective, I believe it is time that we all learnt to do more with less!!!
    anonymous
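
The disagreement between the commenters above comes down to which power model you assume. Here is a minimal sketch contrasting the two, using the wattages quoted in the comments; the flat-draw and demand-scaled models are the commenters' premises, not measured data:

```python
# A sketch of the two power models the commenters are debating. The
# wattages come from the comments; the assumption that a legacy server
# draws its full rated power regardless of load, while a modern host
# scales draw with utilisation, is the commenters' premise, not
# measured data.

RATED = 300  # watts, rated draw per server

def legacy_draw(utilisation):
    """Old server without demand-based power switching: flat draw."""
    return RATED  # draws rated power regardless of load

def modern_draw(utilisation):
    """New host with demand-based switching: draw tracks load."""
    return RATED * utilisation

# Scenario from the comments: 10 old boxes consolidated onto 1 new host.
old_fleet = sum(legacy_draw(0.15) for _ in range(10))  # ~3,000 W
new_host = modern_draw(0.80)                           # ~240 W

print(f"10 legacy servers: {old_fleet:.0f} W")
print(f"1 virtualised host at 80% load: {new_host:.0f} W")
```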