Given that data centres consume relatively large amounts of power, they sometimes attract hostility from some quarters, particularly environmentalists. Specifically, I’m thinking of the Greenpeace report, How Dirty is Your Data, which seeks to highlight the need for greater transparency from global IT operators.
While this particular report was not overtly hostile (it acknowledged the many benefits of IT and was even-handed in its approach), it did highlight some startling facts. For example, it cited a new data centre in the US, run by a well-known technology vendor, that was tipped to consume up to 100 MW of electricity. To put that in context, that’s roughly the consumption of 80,000 US homes, or 250,000 homes in the European Union.
I’ve blogged before about the importance of cloud to energy efficiency in data centres. In fact, I’d go as far as saying that the success of cloud computing really depends on aggressive power management.
This requires a policy-based approach in which each energy component is identified, evaluated and managed. To begin with, several steps can be taken to operate a more energy-efficient cloud, ranging from better instrumentation to better power management at the server and rack level, and power management at the facilities level. In turn, this will deliver optimised power consumption and a reduced total cost of ownership.
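As a rough illustration of what a policy-based approach might look like in practice, here is a minimal sketch. The structure, field names and cap values are my own invention for the example, not any particular product’s schema:

```python
# Hypothetical power-management policy: each energy component is
# identified, given a measurement source, and assigned a cap.
# All names and figures below are illustrative assumptions.
power_policy = {
    "server": {"instrument": "onboard_sensor", "cap_w": 350},
    "rack":   {"instrument": "pdu_meter",      "cap_w": 10_000},
    "zone":   {"instrument": "panel_meter",    "cap_w": 120_000},
}

def check_budget(level: str, measured_w: float) -> bool:
    """Return True if the measured draw is within the policy cap for that level."""
    return measured_w <= power_policy[level]["cap_w"]

print(check_budget("rack", 8_500))   # True: within the 10 kW rack cap
print(check_budget("server", 400))   # False: exceeds the 350 W server cap
```

The point of structuring it this way is that each level (server, rack, zone) is evaluated against its own budget, which is what makes capping and enforcement possible later.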
You’d be surprised at the energy savings that can be made if such a policy also includes the capability to monitor and cap power in real time at the server, rack, zone and data centre levels.
Real-time server power monitoring, for example, enables power-aware scheduling. Virtual machines can be relocated from power-constrained systems to systems with spare power headroom, improving utilisation and performance across different clusters.
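To make the idea concrete, here is a minimal sketch of power-aware placement, assuming each server exposes a real-time power reading and a configured cap. The host names, caps, readings and per-VM power estimate are illustrative assumptions, not real data:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    power_cap_w: float   # configured per-server power cap (assumption)
    power_draw_w: float  # current measured draw (assumption)

    def headroom_w(self) -> float:
        # Spare capacity before this host hits its cap.
        return self.power_cap_w - self.power_draw_w

def pick_migration_target(hosts, vm_est_w=50.0):
    """Pick the host with the most power headroom that can absorb the VM."""
    candidates = [h for h in hosts if h.headroom_w() >= vm_est_w]
    return max(candidates, key=lambda h: h.headroom_w(), default=None)

hosts = [
    Host("rack1-srv01", power_cap_w=350, power_draw_w=340),  # power-constrained
    Host("rack1-srv02", power_cap_w=350, power_draw_w=210),  # plenty of headroom
    Host("rack2-srv01", power_cap_w=350, power_draw_w=300),
]

target = pick_migration_target(hosts)
print(target.name)  # rack1-srv02: the host with the most headroom
```

A real scheduler would of course weigh CPU, memory and network alongside power, but the shape of the decision is the same: measured draw against a cap, per host.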
Optimising rack density is another power management method that can make a significant difference. Intel experiments have shown that dynamic server power capping can increase server density by 30 to 50 per cent while maintaining the same power envelope for each rack.
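A simplified worked example shows why capping raises density. The rack budget and per-server figures below are my own illustrative numbers, not Intel’s: if servers are provisioned at their worst-case rating, fewer fit in a fixed power envelope than if a lower cap is enforced on each one.

```python
# Illustrative arithmetic only: rack budget and server wattages are assumptions.
rack_budget_w = 10_000   # fixed power envelope for the rack
nameplate_w = 500        # worst-case per-server rating, used without capping
capped_w = 330           # enforced per-server cap

uncapped_density = rack_budget_w // nameplate_w  # 20 servers per rack
capped_density = rack_budget_w // capped_w       # 30 servers per rack
increase = (capped_density - uncapped_density) / uncapped_density

print(uncapped_density, capped_density, f"{increase:.0%}")  # 20 30 50%
```

The same 10 kW envelope hosts 50 per cent more servers once the cap guarantees no server can exceed its share of the budget.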
The Intel experiment mentioned above was based on specific hardware and software combinations but it does illustrate the potential for making sweeping power savings. However, it must be pointed out that opportunities to reduce energy use through power capping alone are limited.
The bigger picture needs to change in many facilities. Typically, data centre computing capacity has been planned from potentially misleading figures, such as peak server power consumption or rough approximations of power loads.
While the industry is advancing and becoming more sophisticated, the reality is that actual power consumption under real data centre loads is much lower than the system specifications suggest. This inevitably leads to data centres that are over-provisioned for cooling and power capacity.
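To illustrate the gap, here is a small sketch comparing capacity planned from spec-sheet figures against capacity based on measured draw. All the numbers are assumptions for the sake of the example:

```python
# Illustrative sketch: planning from spec sheets versus measured draw.
servers = 1000
nameplate_w = 500       # per-server spec-sheet figure (assumption)
measured_avg_w = 300    # typical draw under real production load (assumption)

planned_kw = servers * nameplate_w / 1000    # capacity provisioned
actual_kw = servers * measured_avg_w / 1000  # capacity actually used
stranded_kw = planned_kw - actual_kw         # power and cooling sized for nothing

print(planned_kw, actual_kw, stranded_kw)  # 500.0 300.0 200.0
```

In this example, 200 kW of power and cooling capacity is provisioned but never drawn, which is exactly the over-provisioning that real-time measurement is meant to expose.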
While cloud technologies are being rightly touted for their management benefits, OPEX rather than CAPEX spend, and ‘on-tap’ application delivery, correct power management can also bring significant benefits for service providers.
I can only touch on the subject in this blog, but I can say that while energy efficiency for data centres that deliver cloud services is still a work in progress, many advances have already been made. In short, the use of cloud technologies can give us better understanding and control over server power consumption and enable more energy-efficient data centres. Having a clear set of policies based on real and accurate data will be the key to an informed response to the critics.