HP's cool dynamic datacenter cooler


Summary: HP's latest attack on datacenter inefficiency targets the over-provisioning of cooling. It's not that datacenters needlessly bulk up on computer room air conditioning (CRAC) units; the problem is that those units typically run at a fixed rate no matter the operational conditions.


HP's latest attack on datacenter inefficiency targets the over-provisioning of cooling. It's not that datacenters needlessly bulk up on the computer room air conditioning (CRAC) units strategically positioned to push cool air through raised floors and remove hot air as it rises to the ceiling. The problem is that the cooling units, whether liquid or vapor, typically run at a fixed rate no matter the operational conditions, which can be highly inefficient and costly.

Speaking at a press event at HP Labs in Palo Alto today, Paul Perez, HP vice president of storage, networks and infrastructure, said that 40 percent of datacenter operational spend goes into power consumption, especially with greater server densities in blade environments, and at least 60 percent of that power spend is on cooling.
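
By Perez's arithmetic, cooling alone would account for roughly 0.40 × 0.60, or about 24 percent, of a datacenter's total operational spend.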

HP plans to introduce a technology solution, Dynamic Smart Cooling, in the summer of 2007 that the company claims will offer 20- to 45-percent energy cost savings, which could add up to more than $1 million per datacenter per year, depending on energy costs.

HP has been deploying the technology in its own datacenters, validating the concept before formally turning it into a product, according to Pete Karolczak, HP vice president of infrastructure and operations. Dynamic Smart Cooling is simple in concept: attach temperature sensors at the inlet of all server racks, aggregate the sensor data, and dynamically control the airflow and temperature of the CRAC systems to reach optimal efficiency. For example, if a rack of servers is not being used, blowing cool air all over the enclosures isn't necessary.
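
As a very rough illustration of that idea, the control loop might look something like the Python sketch below; the sensor and CRAC interfaces are hypothetical placeholders, not HP's actual Dynamic Smart Cooling implementation.

```python
# Minimal sketch of a sensor-driven cooling loop. The rack_sensors and
# influence objects are hypothetical placeholders, not HP's real interfaces.

TARGET_INLET_C = 25.0   # desired rack-inlet temperature
DEADBAND_C = 1.0        # tolerance before the controller reacts

def control_step(rack_sensors, influence):
    """One pass: read every rack-inlet sensor and nudge the CRAC that
    most strongly influences any rack outside the temperature band."""
    for rack_id, temp_c in rack_sensors.items():
        error = temp_c - TARGET_INLET_C
        if abs(error) <= DEADBAND_C:
            continue                  # rack already within tolerance
        crac = influence[rack_id]     # CRAC with the most influence on this rack
        if error > 0:
            crac.increase_cooling()   # raise blower speed / lower supply temperature
        else:
            crac.decrease_cooling()   # idle or over-cooled racks get less cold air
```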

Chandrakant D. Patel, an HP Fellow and a member of the team that developed the Dynamic Smart Cooling system, said that the technology modulates temperature and airflow so that cooling is provisioned based on need, continually adjusting blower speed, air supply and temperature. He and co-authors Cullen E. Bash and Ratnesh K. Sharma explain the concept in a technical paper, "Dynamic Thermal Management of Air Cooled Data Centers":

The sensor network is attached to standard racks and provides direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense.

The secret sauce, according to Patel, is the computational fluid dynamics that establish the relationships between racks and CRAC units, and the algorithm that dynamically controls the flow and temperature of the air. "The flow is quite complex," Patel told me. "We do computer fluid dynamics models that take hours to figure out the optimal flow. What has been missing is taking thousands of sensors and managing the actuators [CRAC units]."
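
The calibration routine the paper describes, characterizing each sensor's response to each CRAC actuator, could be pictured with a sketch along these lines; the device interfaces, the perturbation size and the settling time are illustrative assumptions, not the authors' code.

```python
import time
import numpy as np

def calibrate(cracs, sensors, read_temps, settle_seconds=300):
    """Perturb each CRAC in turn and record how every rack-inlet sensor
    responds, yielding a (num_sensors x num_cracs) influence matrix.
    All device interfaces here are hypothetical placeholders."""
    baseline = np.array(read_temps(sensors))          # inlet temps before any change
    influence = np.zeros((len(sensors), len(cracs)))
    for j, crac in enumerate(cracs):
        crac.step_supply_temperature(-2.0)            # small, known perturbation
        time.sleep(settle_seconds)                    # let the airflow settle
        influence[:, j] = baseline - np.array(read_temps(sensors))
        crac.step_supply_temperature(+2.0)            # restore the original setpoint
    return influence                                  # larger entries = stronger coupling
```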

 
Extending from each CRAC are bubbles indicating the extent of influence each CRAC has over equipment placed in the room, illustrated by the larger rectangles. The shape of the regions defined by the bubbles is governed by the plant function of the system and is primarily influenced by data center geometry, layout and CRAC flow rate, with secondary dependencies on rack-level flow rate. Source: "Dynamic Thermal Management of Air Cooled Data Centers"

In the HP Labs datacenter in Palo Alto, six CRAC units serve 1,000 servers that consume 270 kW of power out of a total capacity of 600 kW. Running conventionally at a single fixed temperature, the cooling consumes 117 kW, Patel said; running in dynamic mode, only 72 kW.
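
A back-of-the-envelope check on those figures (the electricity rate below is an assumption, not a number HP quoted):

```python
# Rough savings estimate from the HP Labs numbers; the utility rate
# is an assumption, not a figure HP quoted.
conventional_kw = 117
dynamic_kw = 72
rate_per_kwh = 0.10                               # assumed $/kWh

saved_kw = conventional_kw - dynamic_kw           # 45 kW
reduction = saved_kw / conventional_kw            # ~38% less cooling power
annual_savings = saved_kw * 8760 * rate_per_kwh   # roughly $39,000 per year
print(f"{reduction:.0%} reduction, about ${annual_savings:,.0f}/year")
```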

At the press conference, Peter Gross, CEO and CTO of EYP Mission Critical Facilities, Inc., which has designed over 20 million square feet of raised floors, called Dynamic Smart Cooling "the most significant development in infrastructure support systems for datacenters in the last five years."

He cited resolving the traditional conflict between conserving energy and improving reliability as one issue that Dynamic Smart Cooling addresses. "In a world where high-density blades, virtualization and high concern for energy conservation are the major trends, there is always a conflict around conserving energy and improving reliability. Adding redundancy, such as power supplies, increases energy consumption, for example. Dynamic Smart Cooling brings the two components together," Gross said.

He also said that HP's cooling technology addresses the Holy Grail of the industry--finding a way to bring together processing, networks and facilities in an optimized and integrated fashion. "Traditional datacenters provision power and cooling to respond to processing power objectives and hope the capacity and demand will match," Gross said. "Maybe they match for five minutes and then become unbalanced again."

The HP execs claimed that Dynamic Smart Cooling can reduce cooling power consumption by 40 percent in smaller datacenters, by 30 percent in medium-sized datacenters (around 30,000 square feet), and by about 20 percent in large facilities.


 

Dynamic Smart Cooling will work with third-party equipment, Perez said, and HP is partnering with architecture and engineering firms that spec datacenters, as well as mechanical contractors, service providers, real estate specialists and software companies, to drive the adoption of energy-efficient datacenters. In addition, Dynamic Smart Cooling doesn't require any special tricks to interface with and regulate the blowers and air temperature of CRAC systems, Patel said.

Mark Bramfitt, high tech segment lead for customer energy efficiency at Pacific Gas & Electric, said his company will reward customers for, and itself profit from, delivering greater energy efficiency in datacenters.

Pricing will be a function of the energy savings delivered to customers, Perez said, with a six- to nine-month break-even time frame as the target for what customers will pay. In addition, maintenance and service fees will be charged, as well as a new "membership" fee per new rack installed, Perez said. HP also offers a smart cooling assessment service for modeling thermal conditions in a datacenter.
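
To illustrate the arithmetic: if pricing really does track a six- to nine-month payback, a site saving a hypothetical $500,000 a year in cooling energy would be looking at a price of roughly $250,000 to $375,000, before the maintenance and per-rack fees.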

"Within the next six months, we will have a rolling thunder of success stories," Perez promised. He said that HP plans to go after large customers with tens or hundreds of datacenters next year, and toward 2009 will target the thousands of customers with one or two datacenters. HP is also banking that many datacenter are nearing end of life, and in combination with the build out of new, greenfield data centers the market opportunity is huge. 

Given that HP's customers can model their datacenters to project the level of cost savings from deploying Dynamic Smart Cooling, and HP's primary revenue source is based on delivering cost savings, the sensor and cooling management technology should generate a lot of interest and gain converts.



Talkback

8 comments
  • save $2 million if you are clever

    ... if you invested in SSDs you could save on the number of
    servers too! Something HP, IBM and Sun are not going to tell you!

    smart IT folks are aware of this one:
    http://www.storagesearch.com/ssdarticles.html
    Gene(ius):)
    • heat

      And last I checked, SSDs produced a whole lot less heat than HDs; so maybe you could cut down on just how much cooling you need anyway.
      m88k
  • I had no idea people paid so little attention to cooling

    Is it really so revolutionary that we just put temperature sensors on the racks, and then use that data to control the cooling? I admit I haven't spent much time in data centers in the last few years, and I haven't paid much attention to cooling, but I kinda figured we were already doing something this basic, and still had cooling issues. I'm glad HP is coming out with this, but all I really have to say about it is: Duh.
    SBArbeit
  • 60% of Power for Cooling in DataCenter???

    The whole point of heat pumps for air conditioning is that you don't have to expend 1 watt of power to remove 1 watt of heat from an environment. The statement that "60% of power used in a datacenter is needed to cool a datacenter" implies that you actually need more than 1 watt of power to run an air conditioning system that removes 1 watt of heat. If you needed exactly 1 watt of power to drive 1 watt of heat removal in your datacenter, then only 50% of power would go to cooling. In reality, air conditioning systems are much more efficient than this - I know the system in my house is. Something is very fishy here, or some poor company has implemented some of the worst air conditioning systems on the planet. What's up with this?
    MikeLF
    • 60% figure

      I've done a little bit of calculation, in response to some data I got from a meeting with Rackable Systems (another vendor that thinks about power), and what I got was that 60% of power is quite realizable as the energy wasted in delivering computing services, taking into account all inefficiencies upstream of the actual computing components. (You can read about it at http://www.enkiconsulting.net/articles/blog/data-center-power-consumption---a-hot-topic.php)
      Granted, this isn't quite the same thing as the energy used to run the cooling, though it includes it. Certainly air conditioners are more efficient than that, but when you include all the fans (in the CRAC, the ventilation system, the rack, the equipment cases, and on the circuit boards) the number starts to climb. Getting the cool air to the hot components takes more energy than it intuitively seems it might.
      enovikoff
  • It's so SIMPLE

    The concept is so simple that even a 13-year-old like me found it DUMB that no one had thought of it before! It seems strange that it took large firms so many years to come up with this system of data center cooling. Considering the expenditure on cooling solutions, it is a godsend for many such centers.
    Vivek Nair
    • Yeah, simple but uninteresting until now.

      I think it was not implemented before (or considered a problem before) because energy saving was not a priority. But today, with energy prices rising, ecological concerns and aggressive global competition in the hardware field, we are paying more attention to these little (or not so little) things.

      Maybe it was too simple and too obvious, we thought it was already being done...


      Regards,
      MV
      MV_z
  • Not so Smart - this "technology" has been around for years

    HP is simply not telling the whole story.

    "Traditional datacenters provision power and cooling to respond processing power objectives and hope the capacity and demand will match" - that is true, Mr. Patel. But where have you been the last decade?
    There is a company out there that has invested in this area, APC (American Power Conversion), and has already brought to market a lot of solutions that solve exactly the problems HP describes. Dynamic Smart Cooling has been part of the APC portfolio for more than a year (they call it kW-metering). They have also brought to market UPSs which are scalable, modular and redundant (the Symmetra PX and Symmetra MW range, which scale from 10 kVA to 1.6 MW) and consequently save you money on your energy bill because you get more efficiency from your UPS.

    To place temperature sensors in front of the racks (and hopefully also at the back) and so regulate the airflow and temperature of the CRAC units is a practice already adopted in a lot of datacenters, and it has been part of the product portfolio of APC (and other vendors) for more than a year now (since November 2005!). I know a lot of customers already using this "Dynamic Smart Cooling" technology (or whatever you want to call it), and I can assure you it is not HP's.

    I would even dare to say that APC has already improved this "technology" by adding the actual power draws of the servers themselves (they can do this through their RackPDUs, which can easily measure kW or ampere levels). APC also has a central system (InfraStruXure Manager) where all of the data gathered by the CRACs, temp sensors, UPSs, PDUs, etc. is used to dynamically control the air and temperature delivered to the racks.

    Pretending this new technology is the "Holy Grail" just cracks me up; moreover, it is just NOT true that this will save millions in energy consumption if you keep using traditional equipment. There are too many factors HP cannot control to reduce the cost of cooling (and by extension power). Many CRAC units out there simply do not have variable-speed fans, so how do you even begin regulating airflow? Many DCs out there have these "great" raised floors to cool their DCs, but many of those are stuffed with cables (network and power), so regulating the CRACs without measuring airflow at the rack level (or tile level) is just a major flaw in this "technology". Moreover, the laws of physics simply limit the amount of air which can flow through a datacenter floor tile (you are typically looking at about 3-4 kW; beyond that it gets difficult). Also, the arrangement of the perforated tiles is critical in regulating the flow and temperature delivered to a rack. How will this "technology" open or close tiles?

    I would highly recommend that Mr. Patel start reading some of the great white papers (also available through ZDNET) from APC and Liebert, and also start looking at some of the products of these companies already installed at HP's customers.
    ben.declercq