Researchers at the University of Michigan have announced a plan to save up to 75 percent of the energy that power-hungry computer data centers consume by putting idle servers to sleep when they’re not in use.
Thomas Wenisch, assistant professor in the department of Electrical Engineering and Computer Science, and his team analyzed data center workloads and power consumption and used mathematical modeling to develop their approach. It combines PowerNap, an energy conservation method that eliminates almost all the power used by idle servers, with the Redundant Array for Inexpensive Load Sharing (RAILS), a more efficient power delivery technique.
Data centers waste most of the energy they draw because of strict service-level agreements that require the facilities to be ready for peak processing demands far higher than the average demand. Idle energy waste, coupled with the loss of power in delivery and cooling infrastructure, increases power consumption by 50 percent to 100 percent, according to the researchers.
“For the typical industrial data center, the average utilization is 20 to 30 percent. The computers are spending about four-fifths of their time doing nothing,” Wenisch said in an article published by the University of Michigan. “And the way we build these computers today, they’re still using 60 percent of peak power even when they’re doing nothing.”
In a podcast, Wenisch drives the point home further, stating that the total carbon footprint of the world’s data centers is roughly equal to that of the Czech Republic.
According to the team, the techniques employed today, such as dynamic frequency and voltage scaling, don’t do enough to conserve power. Instead, servers could sleep periodically, much as ordinary laptops do. However, the unique demands placed on servers pose some challenges. A large fraction of servers exhibit frequent but brief bursts of activity, so they would have to slumber and wake exceedingly fast, Wenisch says. Their average idle period is mere hundreds of milliseconds, while their average busy period is even shorter, at tens of milliseconds. (A millisecond is one-thousandth of a second.)
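To see why those timescales matter, the figures above can be plugged into a back-of-the-envelope energy model. This is a sketch with illustrative numbers: the peak wattage, burst lengths, and transition cost below are assumptions chosen to match the article's rough figures, not the team's actual measurements.

```python
# Toy PowerNap energy model using the article's rough figures.
# All constants are illustrative assumptions, not measured values.

PEAK_W = 450.0          # assumed peak server power draw
IDLE_FRAC = 0.60        # article: ~60% of peak power while doing nothing
NAP_FRAC = 0.06         # paper: a napping ensemble draws ~6% of peak
BUSY_MS = 30.0          # "tens of milliseconds" of work per burst
IDLE_MS = 300.0         # "hundreds of milliseconds" between bursts
TRANSITION_MS = 1.0     # assumed sleep+wake cost; must stay tiny vs. BUSY_MS

def avg_power(idle_power_frac, transition_ms=0.0):
    """Average power over one busy/idle cycle.

    Transition time is charged at peak power, modeling the energy
    penalty of entering and leaving the nap state."""
    cycle = BUSY_MS + IDLE_MS
    busy_energy = (BUSY_MS + transition_ms) * PEAK_W
    idle_energy = (IDLE_MS - transition_ms) * idle_power_frac * PEAK_W
    return (busy_energy + idle_energy) / cycle

baseline = avg_power(IDLE_FRAC)                # conventional idle server
powernap = avg_power(NAP_FRAC, TRANSITION_MS)  # napping between bursts
print(f"baseline: {baseline:.0f} W, PowerNap: {powernap:.0f} W")
print(f"savings: {100 * (1 - powernap / baseline):.0f}%")
```

With these made-up inputs the model lands in the same neighborhood as the researchers' 75 percent figure, and it also shows why slow transitions kill the scheme: if `TRANSITION_MS` grows to rival `BUSY_MS`, the peak-power penalty of waking up eats the savings.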
A newly published paper states that many of the mechanisms PowerNap requires can be adapted from mobile and handheld devices, but one critical subsystem of current blade chassis falls short of meeting PowerNap’s energy efficiency requirements: the power conversion system.
“PowerNap reduces total ensemble power consumption when all blades are napping to only 6 percent of the peak when all are active. Power supplies are notoriously inefficient at low loads, typically providing conversion efficiency below 70 percent under 20 percent load. These losses undermine PowerNap’s energy efficiency. Directly improving power supply efficiency implies a substantial cost premium. Instead, we introduce the Redundant Array for Inexpensive Load Sharing (RAILS), a power provisioning approach where power draw is shared over an array of low-capacity power supply units (PSUs) built with commodity components. The key innovation of RAILS is to size individual power modules such that the power delivery solution operates at high efficiency across the entire range of PowerNap’s power demands.”
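The sizing idea in the quoted passage can be illustrated numerically. The sketch below assumes a simplified efficiency curve, one that is poor below 20 percent load and flat above it, and hypothetical wattages; the curve shape and the numbers are assumptions for illustration, not figures from the paper.

```python
import math

# Sketch of the RAILS idea: many small commodity PSUs, activated only
# as needed, keep each active unit in its efficient load band.

def psu_efficiency(load_frac):
    """Toy efficiency curve: climbs from 50% at near-zero load to a
    flat ~90% once the PSU is loaded at 20% or more of capacity."""
    if load_frac <= 0:
        return 0.0
    if load_frac < 0.20:
        return 0.50 + 2.0 * load_frac
    return 0.90

def input_power(demand_w, psu_capacity_w, num_psus):
    """Wall power drawn when demand is spread over the active PSUs.

    The RAILS-style policy activates only ceil(demand / capacity)
    units, so each runs near full load; a monolithic supply is the
    special case num_psus = 1."""
    active = min(num_psus, max(1, math.ceil(demand_w / psu_capacity_w)))
    load_frac = demand_w / (active * psu_capacity_w)
    return demand_w / psu_efficiency(load_frac)

DEMAND_NAP_W = 60.0   # ensemble napping at ~6% of an assumed 1000 W peak
monolithic = input_power(DEMAND_NAP_W, 1000.0, 1)  # one 1000 W supply
rails = input_power(DEMAND_NAP_W, 100.0, 10)       # ten 100 W supplies
print(f"monolithic: {monolithic:.0f} W in, RAILS: {rails:.0f} W in")
```

Under these assumptions the single large supply serves the napping load at 6 percent of its capacity, deep in the inefficient region, while the array serves the same load from one nearly fully loaded small unit, which is the high-efficiency operating point the paper describes.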
Wenisch said that PowerNap would also require a new operating system to coordinate the instantaneous sleeping and waking.
The research team will present their approach for improving the energy efficiency of data center computer systems on March 10 at the International Conference on Architectural Support for Programming Languages and Operating Systems in Washington, D.C.