Psssst. Google's data center efficiency secrets. Pass them on.

Written by Heather Clancy, Contributor

First off, I'll grant that there are probably few e-businesses in the world that rely on data centers as much as Google does. So while the best practices it has to share about data center design and technology policy might not yield the same scale of results in your own organization, they are worth considering if only because they might give you at least one idea for solving your own smaller-scale problems.

The search engine company began publishing PUE (power usage effectiveness) numbers for its facilities earlier this year. PUE is the ratio of a facility's total power draw to the power consumed by the IT equipment itself; everything above 1.0 is the overhead of cooling the gear and distributing electricity across the facility.
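
For the numerically inclined, here's a minimal sketch of that calculation; the function name and the sample figures are purely illustrative, not anything Google publishes.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt goes to the servers; anything above
    that is cooling, power distribution, and other overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative figures only: a facility drawing 10 MW in total for a 5 MW IT load
print(pue(10_000, 5_000))  # 2.0
```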

According to Christopher Malone, a thermal technologies architect in the Data Center R&D division of Google, the typical corporate data center with 5 megawatts of IT load has a PUE of 2.0 (meaning that for every watt of IT power used, you consume another watt in overhead). Google, however, has managed to get its numbers down to a level substantially below this. I'll share that number in a sec, but first, here are some of Malone's best practices, shared during a keynote address this week at the Uptime Institute's Lean, Clean & Green IT Symposium.
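
To see what that 2.0 means in practice, here's some back-of-the-envelope arithmetic using Malone's 5-megawatt example; the 1.2 figure below is just a hypothetical "better" PUE for comparison, not Google's actual number.

```python
# Back-of-the-envelope comparison for a 5 MW IT load. The 2.0 figure is the
# typical PUE Malone cited; 1.2 is a hypothetical lower PUE for comparison.
it_load_mw = 5.0

typical_total = it_load_mw * 2.0   # 10.0 MW drawn from the grid
better_total = it_load_mw * 1.2    #  6.0 MW for the same IT work

print(f"Overhead at PUE 2.0: {typical_total - it_load_mw:.1f} MW")    # 5.0 MW
print(f"Overhead at PUE 1.2: {better_total - it_load_mw:.1f} MW")     # 1.0 MW
print(f"Facility power saved: {typical_total - better_total:.1f} MW") # 4.0 MW
```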

In Malone's mind, the two biggest components that really matter in shrinking your PUE number are the losses associated with inefficient cooling system design and the losses associated with distributing power throughout your facility. Naturally, those are the two areas where changes have made the biggest impact in Google's own data centers. Google's design has helped the company realize an 85 percent reduction in cooling energy consumption compared with a typical corporate data center; and when it comes to power distribution, Google uses about 82 percent less electricity than a typical facility.
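
Those two percentages are, roughly, the whole story behind the PUE numbers that follow. Here's a quick sanity check; note that the 70/30 split of overhead between cooling and power distribution is my assumption, not a figure from Google.

```python
# Rough consistency check, not Google's math. Start from a typical PUE of 2.0,
# i.e. 1.0 watt of overhead per watt of IT load, and assume (my guess) that the
# overhead splits roughly 70% cooling / 30% power-distribution losses.
cooling_share = 0.70
distribution_share = 0.30

# Apply the reductions Malone cited: 85% less cooling energy and about
# 82% less power-distribution loss than a typical facility.
remaining_overhead = cooling_share * (1 - 0.85) + distribution_share * (1 - 0.82)

print(f"Estimated PUE: {1 + remaining_overhead:.2f}")  # roughly 1.16
```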

Malone said one big reason Google has been able to achieve this remarkable reduction was its move to include a 99.9 percent efficient on-board UPS module on every server tray. Each module has just enough back-up power to keep the server running until a generator can kick in during an outage. The design also eliminates an unnecessary double conversion (AC-DC-AC) of the power coming into the data center.
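
The double-conversion point is easiest to see with chained efficiencies. The conventional-UPS efficiency below is an assumed ballpark on my part; only the 99.9 percent figure comes from Malone.

```python
# Compare grid power needed to deliver the same IT load under two UPS designs.
# The conventional double-conversion efficiency is an assumed ballpark, not a
# figure from Google; the 99.9% on-board figure is the one Malone cited.
conventional_ups = 0.92   # assumed: facility-level AC -> DC -> AC conversion
onboard_ups = 0.999       # per-tray battery, no double conversion

it_load_kw = 1_000.0      # illustrative IT load

print(f"Grid power needed, conventional UPS: {it_load_kw / conventional_ups:,.0f} kW")
print(f"Grid power needed, on-board UPS:     {it_load_kw / onboard_ups:,.0f} kW")
# The gap (~86 kW here) is conversion loss that never reaches a server.
```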

The second big thing Google did was improve the way its cold aisles and hot aisles are laid out throughout its facilities: power comes in at the top, water runs along the bottom, and hot air comes out the back of the server aisles. Google has also raised the cold-aisle temperatures in its facilities to eliminate unnecessary cooling.

Another thing: Google is moving toward continuous measurement of its PUE numbers, so that it can keep track of whether there are any aberrations that aren't seasonal in nature.
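
Continuous measurement is straightforward to sketch. The snippet below is a hypothetical illustration (none of the names come from Google's tooling): it keeps a rolling PUE history and flags readings that drift well away from the recent average, which is the kind of non-seasonal aberration Malone described.

```python
from collections import deque

def make_pue_monitor(window: int = 96, tolerance: float = 0.05):
    """Return a function that records PUE readings and flags outliers.

    Hypothetical sketch: flags any reading more than `tolerance` away from
    the rolling mean of the last `window` readings (e.g. 96 = one day of
    15-minute samples).
    """
    history = deque(maxlen=window)

    def record(pue_reading: float) -> bool:
        is_aberration = bool(history) and abs(
            pue_reading - sum(history) / len(history)
        ) > tolerance
        history.append(pue_reading)
        return is_aberration

    return record

monitor = make_pue_monitor()
for reading in (1.16, 1.15, 1.17, 1.16, 1.31):  # made-up readings
    if monitor(reading):
        print(f"PUE reading {reading} looks like an aberration")
```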

As of March 15, here are some of Google's numbers; a quick sketch of how the energy-weighted averages work follows the list:

  • Quarterly energy-weighted average PUE (across all five measured facilities) = 1.15
  • Trailing twelve-month energy-weighted average = 1.19
  • Individual facility minimum PUE = 1.12
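
If "energy-weighted average" sounds opaque, it just means each facility's PUE counts in proportion to how much energy that facility consumed over the period, so a big data center weighs more than a small one. A minimal sketch with made-up facility figures (the real per-facility data is on Google's site):

```python
# Made-up facility figures, purely to illustrate the weighting.
facilities = [
    # (PUE, IT energy consumed over the period, in MWh)
    (1.12, 9_000),
    (1.14, 7_500),
    (1.16, 6_000),
    (1.18, 4_000),
    (1.21, 2_500),
]

total_it_energy = sum(energy for _, energy in facilities)
weighted_pue = sum(pue * energy for pue, energy in facilities) / total_it_energy

print(f"Energy-weighted average PUE: {weighted_pue:.2f}")
```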

Here's the complete skinny on Google's data center metrics. The company will update the site quarterly, so check back for new data and tips.
