
Hot issue for green IT

Quocirca's Straight Talking: Take the heat off data centres...
Written by Quocirca, Contributor

Once you've exhausted the obvious ways of making data centres more climate friendly, what next? Quocirca's Clive Longbottom says inefficient cooling systems repay close examination.

Most organisations looking at greening their data centres have probably investigated the standard approaches of rationalisation, consolidation and more efficient use of power.

But there may be many approaches that have yet to appear on an organisation's radar. Some of these techniques can not only save money by dealing more effectively with issues such as excess heat but can also reuse this heat elsewhere in the organisation.

Let's start with optimising air flow in the data centre. In most data centres, air is cooled and then passed through the building, only to be exhausted a few degrees above its entry temperature.

Additional fans can spot-cool high-temperature components, but this is very wasteful because most of the air ends up cooling the volume of the data centre rather than the equipment within it. Indeed, spot-cooling often recools air that has just been used to cool the equipment and has then recirculated back into the room.

Far better to go for a more targeted approach using cold and hot aisles. Here, the data centre is designed as sets of engineered racks that are environmentally contained. Each cold aisle has a closed ceiling, stopping any air from escaping over the top of the racks. Any gaps within the racks are filled with blanks.

Air flows are controlled and directed through the racks to maximise cooling using a mix of laminar and turbulent air flows. The hot air leaving the back of the racks is then exhausted through the hot aisles, which are open to the roof space.

This approach reduces the volume of air needing to be cooled and maximises its cooling effect. Because the hot aisles are not remixing the hot air back into the data centre itself, the air can be exhausted at a far higher temperature and can be used for space heating elsewhere in the organisation.
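
The effect can be quantified with the sensible-heat relationship Q = ρ·V̇·cp·ΔT: for a fixed heat load, the airflow that must be moved and chilled falls in direct proportion as the exhaust-to-supply temperature difference rises. A minimal sketch follows; the 100kW heat load and the two temperature rises are illustrative assumptions, not figures from Quocirca.

```python
# Sketch: airflow needed to remove a given heat load at two delta-Ts.
# The load and temperature figures are illustrative assumptions.

RHO_AIR = 1.2      # air density, kg/m^3 (approx., near 20 C)
CP_AIR = 1005.0    # specific heat of air, J/(kg*K)

def airflow_m3_per_s(heat_load_w: float, delta_t: float) -> float:
    """Volumetric airflow required to carry heat_load_w at a given
    supply-to-exhaust temperature rise (sensible heat only)."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t)

heat_load = 100_000.0  # 100 kW of IT load (assumed)

# Uncontained room: exhaust only ~3 C above supply temperature.
print(f"3 C rise:  {airflow_m3_per_s(heat_load, 3.0):.1f} m^3/s")
# Contained hot/cold aisles: ~15 C rise through the racks.
print(f"15 C rise: {airflow_m3_per_s(heat_load, 15.0):.1f} m^3/s")
```

A five-fold increase in the temperature rise means a five-fold cut in the air volume the chillers have to handle.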

Water-based radiator cooling

With the onward march of utility computing, hot components such as CPUs, disks and power supplies have become more densely packed and standard cooling approaches are beginning to hit their limits.

Fans can only be so effective and heat exchange fins can only get so large before they start interfering with the design of a blade computer itself or the chassis that encloses it.

Many have revisited the idea of water cooling, but it is very expensive and can create major engineering and design problems for a system that must still fit the basic principles of utility computing.

IBM has come up with a different approach: a water-cooled radiator that can be retro-fitted to computer racks and blade chassis. When combined with the hot-cold aisle approach described earlier, the hot air output into the hot aisles transfers heat via the radiator into the water, cooling the exhaust air and reducing either the amount of recooling required or the heat exhausted to the open air.

The heated water can then be used via high efficiency heat transfer systems to provide warm water to other parts of the organisation.
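
To see why water is such an effective carrier, consider that its specific heat is roughly 4,200 joules per kilogram per degree, around four times that of air by mass and far more by volume. The short sketch below works through the arithmetic; the per-rack heat load and water temperature rise are illustrative assumptions, not IBM figures.

```python
# Sketch: water flow needed to absorb a given heat load via a rack radiator.
# The figures are illustrative assumptions, not IBM specifications.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def water_flow_l_per_s(heat_load_w: float, delta_t: float) -> float:
    """Litres of water per second needed to absorb heat_load_w with a
    given inlet-to-outlet temperature rise (1 kg of water ~ 1 litre)."""
    return heat_load_w / (CP_WATER * delta_t)

# 20 kW of heat per rack, water warming by 10 C across the radiator (assumed).
print(f"{water_flow_l_per_s(20_000.0, 10.0):.2f} L/s per rack")
```

Less than half a litre per second, in this example, carries away a full rack's worth of heat.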

Phase-change heat stores

Basic chemistry shows that heat is absorbed when a solid melts into a liquid and given out again when the liquid solidifies. The analogous process of a liquid changing to a gas and back again is the basis of the refrigerator and is also how a basic air conditioner's chilling unit works.

This process can also be used to take heat from a data centre and store it for use elsewhere or at a different time. For example, by using a solid with a melting point of about 30 degrees centigrade, the hot air from a hot aisle or the hot water from a radiator system can be used to melt the solid, carrying the heat away with it very effectively.

This liquid can then be stored in well-insulated tanks until heat is required elsewhere within the organisation, at which point the liquid is allowed to return to its solid state thus giving up its heat. Such phase change systems are highly efficient and require little power input other than basic pumps to maintain the system.
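
A back-of-the-envelope sketch shows the storage capacity involved. The figures are assumptions for illustration: a paraffin-style phase-change material melting near 30 degrees centigrade with a latent heat of fusion of roughly 200 kilojoules per kilogram; actual materials vary.

```python
# Sketch: heat banked in a phase-change material via latent heat of fusion.
# PCM properties are assumed, typical of paraffin-style materials.

LATENT_HEAT_J_PER_KG = 200_000.0  # latent heat of fusion (assumed)

def stored_kwh(pcm_mass_kg: float) -> float:
    """Heat stored by fully melting pcm_mass_kg of PCM, in kWh."""
    return pcm_mass_kg * LATENT_HEAT_J_PER_KG / 3.6e6

# One tonne of PCM melted by hot-aisle air or radiator water:
print(f"{stored_kwh(1000.0):.1f} kWh stored")  # roughly 55 kWh
```

On these assumptions, a single tonne of material banks around 55kWh of heat at a constant temperature, ready to be released when the tank is allowed to resolidify.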

Running data centres at higher temperatures

Finally, the vast majority of data centres are run at temperatures that are more suited to keeping humans cool, rather than at temperatures that fit the working heat profile of components inside the data centre itself.

For example, a CPU can run reasonably happily with a surface temperature of around 60 degrees centigrade. Some are capable of running at up to 100 degrees. Good quality power supplies and disk drives can also run hotter than the temperatures at which we try to maintain them.

So it is very wasteful to attempt to provide masses of cool air for the full volume of a data centre to cool individual components down to between 35 and 40 degrees centigrade.

As the move to utility computing proceeds and commodity components come to cost hundreds rather than thousands of dollars, running a data centre at higher temperatures than is generally accepted may simply mean accepting higher rates of component failure.

But a utility-based approach will minimise downtime, allowing failed components to be plugged in and out as required and integrated and provisioned by the system on the fly. However, dehumidified air will still be required, as warmer air can carry more moisture - which is very bad news for electrical equipment.
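
How much more moisture warmer air can carry is easy to work out from the standard Magnus approximation for saturation vapour pressure. The sketch below compares two illustrative room temperatures; the formula is standard, but the temperatures are arbitrary choices for the example.

```python
import math

# Sketch: saturation moisture content of air at two room temperatures,
# using the Magnus approximation for saturation vapour pressure.
# Temperatures are illustrative; the formula itself is standard.

def saturation_vapour_pressure_pa(temp_c: float) -> float:
    """Magnus approximation, valid over ordinary room temperatures."""
    return 611.2 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def max_moisture_g_per_m3(temp_c: float) -> float:
    """Water vapour air can hold at saturation, from the ideal gas law
    (specific gas constant of water vapour ~461.5 J/(kg*K))."""
    e_pa = saturation_vapour_pressure_pa(temp_c)
    return 1000.0 * e_pa / (461.5 * (temp_c + 273.15))

for t in (22.0, 35.0):
    print(f"{t:.0f} C: up to {max_moisture_g_per_m3(t):.1f} g of water per m^3")
# Air at 35 C can hold roughly twice the moisture of air at 22 C,
# so dehumidification remains essential in a hotter data centre.
```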

Data centre design is becoming increasingly important and, where possible, the greenest data centres will be based on new builds. However, the approaches described can be retro-fitted in many of today's centres and can provide direct energy savings in the data centre and for the organisation through the use of heat reutilisation.
