
How low can a datacentre get?

The datacentre is a hot area right now -- mainly because of cooling. The key here -- as you won't be surprised to learn -- is that cooling costs are rocketing, so there's a case to be made for investing in smarter ways of keeping datacentres cool.
Written by Manek Dubash, Contributor

I've seen or learned of a few of these in recent months. Most datacentres you walk into these days are designed using the traditional cold aisle, hot aisle model. This means that the racks containing servers sit on raised floors composed of tiles through which holes have been punched to allow ingress of cold air. This cold aisle runs along the front of the machines. The servers suck in the cold air, and blast it out of the back as warm air into the adjoining, parallel aisle. Air conditioning ducts suck up the warm air, condition it into cold air, and the cycle begins again.

One of the main problems with this design is that there's no separation between the hot and cold aisles, which allows the two airflows to mix and makes the arrangement much less efficient than it could be. The obvious thing to do is to separate the two aisles completely, and that's what a lot of new datacentre designs have done. They've added partitions so that there can be no mixing of fluids (that's what air is, of course) between the two aisles.

Smarter systems add sensors to measure the temperature difference between the two aisles and meter the volume of air input according to the actual needs of the system. If cooling needs decrease -- perhaps some of the servers slow or even shut down overnight -- then the air delivery system can afford to reduce its efforts, so saving energy.
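To make that concrete, here's a minimal sketch in Python of the kind of differential-temperature control loop such a metering system might run. The sensor readings and fan interface are hypothetical stand-ins for illustration only, not any vendor's actual control code or API.

# Illustrative only: a toy differential-temperature control loop.
# The sensor and fan functions below are hypothetical stand-ins,
# not any real datacentre vendor's API.

import random

SETPOINT_DELTA_C = 10.0           # target hot-aisle minus cold-aisle difference
MIN_DUTY, MAX_DUTY = 20.0, 100.0  # fan duty-cycle limits, percent

def read_aisle_temps():
    # Pretend sensors: return (cold_aisle_C, hot_aisle_C).
    return 22.0 + random.uniform(-1, 1), 33.0 + random.uniform(-2, 2)

def set_fan_duty(duty):
    # Stand-in for whatever actually drives the air handlers.
    print(f"fan duty -> {duty:.1f}%")

def control_step(duty, gain=2.0):
    cold, hot = read_aisle_temps()
    error = (hot - cold) - SETPOINT_DELTA_C  # positive means too little airflow
    duty = max(MIN_DUTY, min(MAX_DUTY, duty + gain * error))
    set_fan_duty(duty)
    return duty

if __name__ == "__main__":
    duty = 50.0
    for _ in range(5):
        duty = control_step(duty)

The point of the sketch is simply that fan effort tracks demand: if the servers throttle back overnight, the temperature difference shrinks and the loop winds the airflow down with it.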

For example, Bladeroom's latest design uses ambient air cooling, which the company reckons will work unsupplemented by refrigeration in all but the most extreme environments. And even in hot and humid North Carolina, storage vendor NetApp's latest datacentre runs on ambient air 67 percent of the time, according to the company.

Both designs add a couple of other wrinkles to the traditional approach. First, they run warmer than normal, at between 22 and 24 degrees C. Bladeroom reckons this works fine and still provides plenty of headroom in the event of a cooling failure, because servers are rated to run at up to 35 degrees.

Bladeroom's system is also modular, a design also adopted by Colt. It's not like the datacentre-in-a-box idea pioneered by Sun a few years back. Instead, modularity removes one of the barriers to acquiring a datacentre by allowing enterprises and service providers to buy what they need now and add modules as the business grows. Until now, you had to buy a whole big space and all the equipment to run and cool it, then hope the servers arrived quickly enough to fill it up and cost-justify the expense.

The result is that it's now getting easier to buy a datacentre, something that's becoming more urgent as IT centralises its operations and needs space to house all those virtualised servers and desktops. What's more, ambient cooling means cheaper running costs: NetApp reckons it achieves a PUE (power usage effectiveness) of 1.21, while Bladeroom gets it down to 1.13. That means ancillary equipment such as the cooling system adds only 13 percent to the power bill resulting from running the servers, storage and networking gear.

It's a big step forward from the old rule of thumb, under which you could reckon to spend as much again on cooling and other overheads as the servers and other IT kit cost to run -- a PUE of 2.
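For anyone who wants to check the arithmetic: PUE is total facility power divided by the power drawn by the IT equipment, so the overhead share falls straight out of it. A quick sketch in Python using the figures quoted above:

# PUE = total facility power / IT equipment power,
# so the share spent on cooling, UPS losses and so on is PUE - 1.

def overhead_percent(pue):
    return (pue - 1) * 100

for label, pue in [("old rule of thumb", 2.0),
                   ("NetApp", 1.21),
                   ("Bladeroom", 1.13)]:
    print(f"{label}: PUE {pue} -> {overhead_percent(pue):.0f}% on top of the IT load")

That prints 100 percent for the old rule of thumb, 21 percent for NetApp and 13 percent for Bladeroom.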

How low can it go? I suspect there's more to come.
