I was speaking with Ken Brill, the founder of the Uptime Institute, about a project he's working on. The key point is that system suppliers are claiming that the power saver mode built into most new industry standard systems is saving organizations a bundle on power. Ken doesn't think that is really the case and would like to gather data that allows him to track power usage by transaction.
I thought this was a rather ambitious project, as an organization's "transactions" almost certainly span its mainframes and midrange machines as well as its industry standard systems. So, power saver mode on one type of equipment may not have a significant impact on the overall picture.
Another point is that many devices are shared resources. Devices such as network routers, storage servers and the like are likely to continue running in normal mode because demand for their services is spread across many workloads and servers. So, power saver mode on one or more industry standard systems may have little impact on the power consumed by these "infrastructure devices."
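To illustrate that dilution effect, here's a quick back-of-the-envelope sketch. Every number in it (server wattages, the shared-gear draw, the transaction rate) is a hypothetical assumption for illustration, not measured data:

```python
# Hypothetical illustration: why power saver mode on a few servers may
# barely move power-per-transaction when shared infrastructure
# (routers, storage servers) keeps running in normal mode.
# All figures below are assumptions, not measurements.

SERVERS = 10
SERVER_WATTS = 400           # assumed draw per server, normal mode
SAVER_WATTS = 300            # assumed draw per server, power saver mode
SHARED_WATTS = 6000          # assumed draw of shared routers/storage
TRANSACTIONS_PER_HOUR = 1_000_000

def watts_per_million_txns(servers_in_saver_mode: int) -> float:
    """Total facility watts divided by hourly transactions (in millions)."""
    normal = SERVERS - servers_in_saver_mode
    total = (normal * SERVER_WATTS
             + servers_in_saver_mode * SAVER_WATTS
             + SHARED_WATTS)
    return total / (TRANSACTIONS_PER_HOUR / 1_000_000)

baseline = watts_per_million_txns(0)    # 10 * 400 + 6000 = 10,000 W
with_saver = watts_per_million_txns(5)  # 5*400 + 5*300 + 6000 = 9,500 W
print(f"baseline: {baseline:.0f} W per million transactions")
print(f"5 servers in saver mode: {with_saver:.0f} W per million transactions")
print(f"facility-wide reduction: {100 * (1 - with_saver / baseline):.1f}%")
```

Under these assumed numbers, each server in saver mode draws 25% less, yet the facility-wide figure falls only 5%, because the shared gear's 6,000 W is untouched. That gap between per-server claims and per-transaction reality is exactly what measuring power by transaction would expose.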
Does your organization have any data that would tend to prove or disprove Ken's thesis? If so, please let me know.