Turning up the heat to cool down

Summary: In my last blog post I touched on the need to create energy-efficient data centres to ensure cloud computing lives up to its true potential. This time around I’d like to focus on a specific method for improving efficiency through high temperature ambient (HTA) data centres.

However, before I get rolling, I'd like to put the need for greater efficiency in context. First up, data centres are estimated to consume 1.5 percent of total world power, and that consumption continues to rise. In concrete terms, that's the equivalent of 50 power stations. Data centres also generate 210 million metric tons of CO2 a year, roughly the same as 41 million cars.

A considerable proportion of the energy consumed by data centres is given off as heat, which under the traditional view of facility management poses problems for the reliable running of servers. Consequently, data centre operators have chilled their facilities to between 18-21°C to keep the IT equipment cool. Ironically, this cooling process itself consumes a considerable portion of a facility's overall energy demand.

We're in this position for a number of reasons, including the fact that IT equipment manufacturers have traditionally specified their systems to operate at 20-25°C, largely to ensure reliable operation. However, running a data centre at a higher ambient temperature, and using natural cooling resources such as outside air, can reduce energy consumption and, in turn, lower annual CO2 emissions.

One commonly used metric for quantifying data centre energy efficiency is The Green Grid's Power Usage Effectiveness (PUE): the ratio of the total power entering a data centre to the power actually consumed by the IT equipment.

A PUE of 2, for example, means that only 50 percent of the power drawn by a data centre is actually consumed by the IT equipment; the rest is consumed by the facilities equipment, chiefly cooling. This is clearly inefficient.

Working to reduce the PUE so it is closer to 1.0 results in more of the inbound power actually being used by the IT equipment and less used in cooling the data centre. This can result in direct financial savings to the data centre operator.
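The PUE arithmetic above is simple enough to sketch in a few lines. This is a minimal illustration, not an official Green Grid tool, and the power figures are hypothetical:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

def it_power_fraction(pue_value: float) -> float:
    """Fraction of inbound power that actually reaches the IT equipment."""
    return 1.0 / pue_value

# Hypothetical readings: a 1,000 kW facility whose servers draw 500 kW.
print(pue(1000, 500))           # 2.0 -> only half the power reaches the IT load
print(it_power_fraction(1.08))  # ~0.93 -> nearly all power reaches the IT load
```

At a PUE of 1.08 (the figure Yahoo reports below), roughly 93 percent of inbound power reaches the IT equipment, versus 50 percent at a PUE of 2.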

There are lots of ways to lower a data centre's PUE, but raising the ambient temperature can have a striking impact. Facebook, for example, raised the ambient temperature of its Santa Clara data centre from the typical 18-21°C to 27°C. Its annual energy bill fell by $229,000 as a result, and the change earned it a $294,761 energy rebate.

Intel's IT department has run a trial at a data centre in New Mexico to evaluate the value of high ambient temperatures and natural cooling resources. The facility housed 900 production servers and used 100 percent air exchange at 33°C, with no humidity control and minimal air filtration. It delivered an estimated 67 percent power saving compared with the typical 18-21°C, translating into approximately $2.87 million saved on power costs.

Another example of HTA in action is the Yahoo Computing Co-op, a data centre design that operates without chillers and requires water for cooling only a few days a year. It relies on 100 percent natural air flow, which means less than 1 percent of the building's total energy consumption goes on cooling; its estimated PUE is 1.08.

Of course, many elements go into producing a truly energy-efficient data centre, which in turn delivers true cloud-computing benefits. These range from increasingly powerful yet more energy-efficient processors to server platform innovation. But there's no escaping the fact that by raising ambient temperatures and using natural cooling resources, the energy efficiency of data centres can be improved dramatically.

Topic: Cloud

Alan Priestley

About Alan Priestley

I'm a multi-year Intel veteran, and currently hold the role of Strategic Marketing Director within EMEA.

My time with Intel began with a role supporting all the PC design accounts in the UK - back in the days when the i286 was the latest and greatest processor on the Intel roadmap. Since then, I've moved through various technical and product marketing roles, including being responsible for launching the Xeon processor product line in EMEA and managing the Itanium program office.

At present, I'm responsible for Intel's high-end server business and Cloud Marketing strategy in EMEA. This puts me at the hub of major developments in both server technology, and the cloud ecosystem it's powering. I'm now very involved with the Intel Cloud Builders programme.

1 comment
  • Certainly the analysis I did on Cloud Computing Energy Efficiency for Pike Research confirms your point that many elements go into producing a truly energy-efficient data centre. Raising the operating specifications on the equipment is certainly one of the most significant changes, although in my mind it is not as significant as virtualization since, at best, raising temperatures can only lead to a 50% improvement in efficiency. Interestingly the more widely HTA is adopted the less meaningful PUE becomes. Hopefully we will all see a day when most data centers are cooled with ambient air and PUE becomes an obsolete measurement.
    Bruce Daley