Fresh air could save millions in datacenter cooling costs, Intel has claimed, after a successful experiment in the New Mexico desert.
Replacing air conditioning with piped-in outside air cut power costs with no appreciable increase in server failure rates, the company concluded in a research paper. Despite heavy dust and wide temperature swings, both long considered undesirable in datacenters, the equipment was unaffected, Intel said.
"Servers... were subjected to considerable variation in temperature and humidity, as well as poor air quality; however, there was no significant increase in server failures," said the paper. "If subsequent investigation confirms these promising results, we anticipate using this approach in future, high-density datacenters."
Intel estimated an annual cost reduction of approximately $143,000 (£79,000) for a small, 500kW datacenter, based on electricity costs of eight cents per kWh. In a larger 10MW datacenter, the estimated annual cost reduction was $2.87 million.
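As a rough sanity check on those figures, the arithmetic can be sketched from the numbers the article gives: the IT load, the 8 cents/kWh electricity price, and the hours in a year. This is a back-of-the-envelope reconstruction, not Intel's published methodology; the "cooling fraction" it derives is an implied figure, not one Intel states.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours
PRICE_PER_KWH = 0.08       # dollars per kWh, as stated in the article

def annual_it_energy_cost(it_load_kw):
    """Yearly electricity cost of the IT load alone, assuming full utilisation."""
    return it_load_kw * HOURS_PER_YEAR * PRICE_PER_KWH

small = annual_it_energy_cost(500)  # the 500kW datacenter in the article
print(f"500 kW IT energy cost: ${small:,.0f}/year")

# The reported $143,000 saving implies the eliminated cooling energy was
# roughly 40% of the IT energy (a derived figure, not one Intel quotes):
print(f"Implied cooling overhead removed: {143_000 / small:.0%}")

# The two published savings figures scale linearly with capacity:
# $143,000 x (10 MW / 500 kW) = $2.86M, close to the quoted $2.87M.
print(f"Scaled to 10 MW: ${143_000 * (10_000 / 500):,.0f}")
```

The near-exact 20x ratio between the two quoted savings suggests Intel simply scaled the small-datacenter estimate linearly to the 10MW case.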
Intel used a standard air filter that removed larger particles from the air but not fine dust. The 32 servers and racks became coated in dust, and humidity was monitored but not controlled; even so, the failure rate was 4.46 percent, compared with 3.83 percent in Intel's main datacenter over the same period.
The experiment ran for 10 months, from October 2007 to August 2008. More than 900 blade servers, used for production design work, were split between two compartments. One compartment was cooled with outside air, with temperatures ranging from 18°C to 32°C; the other was cooled with air conditioning and served as a control.
Intel set up the experiment to challenge assumptions about optimal operating conditions in datacenters. Received wisdom has it that temperature, humidity and air quality must be strictly maintained.
However, Intel started from the premise that, because servers are designed to operate at temperatures of up to 37°C, air cooling could be feasible even in desert regions.
The experiment was run as part of the 'Intel IT's Eight-Year Data Center Efficiency Strategy' program, which aims to reduce datacenter costs.