Over the last few years I’ve been inside more than a few datacenters. From ultra-slick new construction to older, well-established facilities and everything in between, just about all of them share many things in common. While features added to older datacenters may have a certain jerry-rigged look to them, it’s always interesting to see how something that started as an ad hoc measure in an older datacenter is now a standard feature in newer facilities.
But one need that’s common to almost every datacenter facility is managing the flow of air. Be it a large open room, hot-air containment, cold aisles, or any of the many techniques for minimizing cooling needs while maximizing IT load, managing airflow plays a major role.
That’s why I’m always surprised to see how rarely blanking panels are used in active datacenters. More than once I’ve walked through a server room filled with eddies of extremely hot air and localized pools of cold air. Almost always there is an obvious, clearly visible reason for these temperature zones: racks that have been reconfigured and are now empty in sections, completely empty racks left behind as applications and hardware were migrated and consolidated, new equipment that took up less (or more) space, ad infinitum.
I always ask about the open rack spaces and dead air zones, and the answer is always that it’s a temporary situation. But further investigation tends to reveal that “temporary”, in many cases, means weeks or months. And all of that time is spent with whatever cooling mechanism is in place working harder and consuming additional energy.
When planning the energy budget for the datacenter, the efficiency of temperature management is a critical component. While the operator can’t do anything about the outside weather, accurately assessing the energy requirements of your datacenters means establishing as steady a state as possible. Getting in the habit of using inexpensive blanking panels, which these days are available as snap-in pieces ranging from 1U to 42U in size, can help your datacenter meet its energy consumption targets.
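To make the cost of “temporary” open rack space concrete, here is a back-of-the-envelope sketch. It uses the standard sensible-heat formula for air (Q in BTU/hr ≈ 1.08 × CFM × ΔT); the per-U recirculation airflow, temperature delta, and chiller COP are purely illustrative assumptions, not measured values, so treat the output as an order-of-magnitude estimate only.

```python
# Back-of-the-envelope sketch: extra cooling energy wasted when open rack
# space lets hot exhaust air recirculate to equipment intakes.
# All default figures below are illustrative assumptions, not measurements.

def extra_cooling_kwh(open_u, airflow_per_u_cfm=15, delta_t_f=20,
                      cop=3.0, hours=24 * 30):
    """Rough extra cooling energy (kWh) over a period (default: 30 days)
    caused by hot-air recirculation through unsealed rack space."""
    # Assumed recirculation airflow through the open U-space.
    recirc_cfm = open_u * airflow_per_u_cfm
    # Sensible heat carried by that air at standard conditions (imperial):
    # Q [BTU/hr] = 1.08 * CFM * deltaT [F]
    btu_per_hr = 1.08 * recirc_cfm * delta_t_f
    kw_thermal = btu_per_hr / 3412.0   # convert BTU/hr to thermal kW
    kw_electric = kw_thermal / cop     # chiller input power at assumed COP
    return kw_electric * hours

# Example: 10U left open for a month under the assumptions above.
print(f"{extra_cooling_kwh(10):.0f} kWh")
```

Even with these modest assumptions, a single 10U gap left open for a month works out to a few hundred kilowatt-hours of extra chiller work, which is far more than the cost of a handful of blanking panels.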