
HP's cool dynamic datacenter cooler

Written by Dan Farber

HP’s latest attack on datacenter inefficiency is the over-provisioning of cooling. It's not that datacenters needlessly bulk up on the computer room air conditioning (CRAC) units strategically positioned to push cool air through raised floors and take out the hot air as it rises to the ceiling. The problem is that the cooling units, whether liquid or vapor, typically run at a fixed rate regardless of operational conditions, which can be highly inefficient and costly.

Speaking at a press event at HP Labs in Palo Alto today, Paul Perez, HP vice president of storage, networks & infrastructure, said that 40 percent of datacenter operational spending goes to power consumption, especially with greater server densities in blade environments, and that at least 60 percent of that power spending is on cooling.

HP plans to introduce a technology solution, Dynamic Smart Cooling, in the summer of 2007 that the company claims will offer 20- to 45-percent energy cost savings, which could add up to more than $1 million per data center per year, depending on energy costs. 

[Photo: rack-inlet temperature sensor]
HP has been deploying the technology in its own datacenters, validating the concept before formally turning it into a product, according to Pete Karolczak, HP vice president of infrastructure and operations. Dynamic Smart Cooling is simple in concept: attach temperature sensors (at left) at the inlet of every server rack, aggregate the sensor data, and dynamically control the airflow and temperature of the CRAC units to reach optimal efficiency. For example, if a rack of servers is not being used, blowing cool air all over the enclosures isn't necessary.
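As a rough illustration of the concept — not HP's actual algorithm — a control loop over the aggregated sensor data might look like the sketch below. The function name, setpoint, deadband and step sizes are all invented for the example.

```python
# Illustrative sketch of the Dynamic Smart Cooling idea: poll rack-inlet
# sensors, then nudge each CRAC's blower speed based on how far the hottest
# inlet it influences deviates from a target temperature. All names and
# thresholds here are hypothetical, not HP's.

TARGET_INLET_C = 25.0  # assumed setpoint, not HP's actual figure

def adjust_cracs(inlet_temps_by_crac, blower_speeds):
    """inlet_temps_by_crac: {crac_id: [inlet temps (C) of racks it influences]}
    blower_speeds: {crac_id: current blower speed as a fraction 0.0-1.0}
    Returns a dict of new blower speeds."""
    new_speeds = {}
    for crac_id, temps in inlet_temps_by_crac.items():
        hottest = max(temps)
        speed = blower_speeds[crac_id]
        if hottest > TARGET_INLET_C + 1.0:    # too warm: push more air
            speed = min(1.0, speed + 0.1)
        elif hottest < TARGET_INLET_C - 1.0:  # overcooled: save energy
            speed = max(0.2, speed - 0.1)     # keep a minimum airflow
        new_speeds[crac_id] = speed
    return new_speeds
```

In practice the mapping from racks to CRAC units is the hard part — that is the "regions of influence" problem the HP paper solves with computational fluid dynamics.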

Chandrakant D. Patel, an HP Fellow and member of the team that developed the Dynamic Smart Cooling system, said that the technology modulates temperature and airflow so that cooling is provisioned based on need, continually adjusting blower speed, air supply and temperature. He and co-authors Cullen E. Bash and Ratnesh K. Sharma explain the concept in a technical paper, "Dynamic Thermal Management of Air Cooled Data Centers":

The sensor network is attached to standard racks and provides direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense.

The secret sauce, according to Patel, is the computational fluid dynamics that establishes the relationships between racks and CRAC units, and the algorithm that dynamically controls the flow and temperature of the air. "The flow is quite complex," Patel told me. "We do computational fluid dynamics models that take hours to figure out the optimal flow. What has been missing is taking thousands of sensors and managing the actuators [CRAC units]."

 
[Diagram: CRAC regions of influence]

Extending from each CRAC are bubbles indicating the extent of influence each CRAC has over equipment placed in the room, illustrated by the larger rectangles. The shape of the regions defined by the bubbles is governed by the plant function of the system and is primarily influenced by datacenter geometry, layout and CRAC flow rate, with secondary dependencies on rack-level flow rate. Source: "Dynamic Thermal Management of Air Cooled Data Centers"

In the HP Labs datacenter in Palo Alto, six CRAC units serve 1,000 servers, which consume 270 kW of power out of a total capacity of 600 kW. Running in the conventional mode, at a single fixed temperature, the cooling consumes 117 kW, Patel said; running in the dynamic mode, only 72 kW.
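Those figures imply a cut of roughly 38 percent in cooling power at the HP Labs facility. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Sanity check of the HP Labs figures: 117 kW of cooling power in
# conventional (fixed-rate) mode vs. 72 kW in dynamic mode.
static_kw = 117.0
dynamic_kw = 72.0

saved_kw = static_kw - dynamic_kw   # 45 kW of cooling power saved
reduction = saved_kw / static_kw    # ~0.385, i.e. about a 38% cut
annual_kwh = saved_kw * 24 * 365    # 394,200 kWh saved per year if sustained

print(f"{reduction:.0%} reduction, {annual_kwh:,.0f} kWh/yr saved")
```

At typical commercial electricity rates of the era, savings on that order at a single small facility already run to tens of thousands of dollars a year, which is consistent with HP's million-dollar claim for much larger datacenters.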

At the press conference, Peter Gross, CEO and CTO of EYP Mission Critical Facilities, Inc., which has designed over 20 million square feet of raised floor, called Dynamic Smart Cooling "the most significant development in infrastructure support systems for datacenters in the last five years."

He cited resolving the traditional conflict between conserving energy and improving reliability as one issue that Dynamic Smart Cooling addresses. "In a world where high-density blades, virtualization and high concern for energy conservation are the major trends, there is always a conflict around conserving energy and improving reliability. Adding redundancy, such as power supplies, increases energy consumption, for example. Dynamic Smart Cooling brings the two components together," Gross said.

He also said that HP's cooling technology addresses the Holy Grail of the industry--finding a way to bring together processing, networks and facilities in an optimized and integrated fashion. "Traditional datacenters provision power and cooling to respond to processing power objectives and hope the capacity and demand will match," Gross said. "Maybe they match for five minutes and then become unbalanced again."

The HP execs claimed that Dynamic Smart Cooling can reduce cooling power consumption by 40 percent in smaller datacenters, by 30 percent in medium-sized datacenters (around 30,000 square feet), and by about 20 percent in large facilities.


 

Chandrakant D. Patel, an HP Fellow and member of the team that developed the Dynamic Smart Cooling system

Dynamic Smart Cooling will work with third-party equipment, Perez said, and HP is partnering with architecture and engineering firms that spec datacenters, as well as mechanical contractors, service providers, real estate specialists and software companies, to drive the adoption of energy-efficient datacenters. In addition, Dynamic Smart Cooling doesn't require any special tricks to interface with and regulate the blowers and air temperature of CRAC systems, Patel said.

Mark Bramfitt, high tech segment lead for customer energy efficiency at Pacific Gas & Electric, said his company will reward customers for, and also profit from, delivering greater energy efficiency in datacenters.

Pricing will be a function of the energy savings delivered to customers, Perez said, with a six-to-nine-month break-even time frame as the target for what customers will pay. Maintenance and service fees will also be charged, as well as a new "membership" fee per rack installed, Perez said. HP offers a smart cooling assessment service for modeling thermal conditions in a datacenter.
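Since HP is pegging price to savings, the customer's evaluation reduces to a simple payback-period calculation. The figures below are purely illustrative, not HP's pricing:

```python
# Hypothetical payback-period check against the six-to-nine-month
# break-even target Perez described. Dollar figures are invented
# for illustration; only the target window comes from HP.
def payback_months(system_cost_usd, monthly_energy_savings_usd):
    """Months until cumulative energy savings cover the system cost."""
    return system_cost_usd / monthly_energy_savings_usd

# e.g. a deployment priced at $150,000 that trims $20,000/month
# off the power bill pays for itself in 7.5 months -- inside the
# six-to-nine-month window HP is targeting.
months = payback_months(150_000, 20_000)
```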

"Within the next six months, we will have a rolling thunder of success stories," Perez promised. He said that HP plans to go after large customers with tens or hundreds of datacenters next year, and toward 2009 will target the thousands of customers with one or two datacenters. HP is also banking that many datacenters are nearing end of life, and that in combination with the build-out of new greenfield datacenters the market opportunity is huge.

Given that HP's customers can model their datacenters to project the level of cost savings from deploying Dynamic Smart Cooling, and HP's primary revenue source is based on delivering cost savings, the sensor and cooling management technology should generate a lot of interest and gain converts.
