Filled aisles are blocked off using panels
Each 5,381-square-foot module can support an IT load of up to 750kW, although some can go as high as 825kW if needed. Each module can hold up to 204 full-sized racks, or 254 smaller ones.
The average draw per rack is 4kW, but a single rack can draw as much as 20kW, provided dummy server panels are positioned on either side of it. These panels maintain the separation between hot and cold aisles and keep the power draw stable.
Each module has a design life of between 20 and 25 years, according to Colt.
When filled out, each processing aisle (pictured) within the datacentre becomes self-contained: cold air enters through grilles at the bottom front of the racks, and hot air is exhausted at the back.
Because the new aisles have a lower power usage effectiveness (PUE) than the legacy hardware on site — 1.21 versus 1.6 — Colt is moving the most power-hungry hardware over in stages, with cloud hardware going in first.
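PUE is the ratio of total facility power to IT load, so the gap between 1.6 and 1.21 translates directly into cooling and power-distribution overhead. A minimal sketch of that arithmetic, using the 750kW per-module IT load quoted above (the side-by-side comparison is illustrative, not a figure supplied by Colt):

```python
# PUE = total facility power / IT load power,
# so non-IT overhead = (PUE - 1) * IT load.

def overhead_kw(pue: float, it_load_kw: float) -> float:
    """Cooling and distribution power implied by a given PUE."""
    return (pue - 1.0) * it_load_kw

IT_LOAD = 750.0  # kW per module, as quoted in the article

legacy_overhead = overhead_kw(1.6, IT_LOAD)   # roughly 450 kW
new_overhead = overhead_kw(1.21, IT_LOAD)     # roughly 157.5 kW
saving = legacy_overhead - new_overhead       # roughly 292.5 kW per module
```

On these numbers, each fully loaded module avoids close to 300kW of overhead compared with running the same IT load at the legacy PUE.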
The low PUE is attained by a combination of hot and cold aisle separation, airflow modelling and free-air cooling. The airflow modelling is done via a computational fluid dynamics program, which determines the positioning of each rack according to its power consumption to ensure a smooth air path throughout the module.
When touring the module, ZDNet UK saw IBM blades, IBM system storage and Cisco networking. Ruddock said any hardware that can be racked can be supported, as Colt acts as a co-location provider.
Photo credit: Jack Clark
See more of the datacentre tour on ZDNet UK.