
Component-level rightsizing to reduce datacenter power, costs

Companies should focus on optimizing specific components such as memory and storage for greater power and cost savings, urge industry experts, although one notes that not all companies have the scale to reap these benefits.
Written by Kevin Kwang, Contributor

SINGAPORE--Web 2.0 companies such as Facebook and Google are leading the charge toward more innovative datacenter designs, focusing on optimizing existing hardware components to deliver better performance while cutting cost and power consumption. This, industry observers note, is something other enterprises would do well to look into.

Avneesh Saxena, group vice president of IDC Asia-Pacific, said the top three objectives for managers of datacenter facilities today are "lowering cost, building scale and improving returns on investment (ROI)". Given that the earth's power resources are "finite" and energy prices will only go up, he added that power costs will increasingly dominate datacenter priorities going forward.

Currently, data centers account for 1 percent of the world's electricity consumption, and this could very well jump to as high as 3 to 4 percent in the near future, Saxena said during his keynote at the inaugural Asia-Pacific Datacenter Leadership Council conference held here Friday.

Contributing to these rising power costs are denser configurations of datacenter hardware, the analyst said, adding that "preconfigured roll-on boxes" from Oracle and SAP, for instance, may take up less space but consume "a lot more power". He added that as more companies virtualize their data centers, energy usage would also climb as virtual machines drive up CPU utilization.

Against this backdrop, David Fosberg, vice president of Samsung Asia, said companies such as Microsoft, Google and Facebook are leading the way in rethinking their datacenter architecture to optimize costs and efficiency.

Facebook, for one, made available its datacenter design via the Open Compute Project it initiated in April. According to an earlier report, the social networking giant's data center has no air conditioning but relies on a water-misting system for cooling. Hot air coming off the servers is then recycled to provide heating to attached buildings. Its server chassis was also stripped down to almost nothing, using 22 percent fewer materials, noted Amir Michael, who designed the new server.

Component-level savings
Also a speaker at the Datacenter Leadership conference, Fosberg noted that data centers account for 23 percent of global ICT power usage, and this level of consumption is forcing CIOs toward "micro-focusing on component savings".

Citing a survey jointly conducted by IBM, Samsung and Dell, the Samsung executive then pointed out that one of the datacenter management "pinch points" revolves around DRAM (dynamic random-access memory), which consumes 22 percent of power and constitutes 50 percent of the overall bill of materials for a typical data center.

This led to Samsung's move to invest in what he called a "chase for nanometers", as the Korean memory manufacturer looks to shrink its DRAM process geometry. During his presentation, Fosberg launched Samsung's latest double data-rate 3 (DDR3) DRAM modules, built at 30nm and running at 1.25 volts, which he said would help companies bring down their server power usage and overall datacenter costs.

"Asia-Pacific enterprises looking for competitive advantage in reducing costly datacenter energy consumption and cooling costs are now examining how choices made at the memory module level, within their servers, can improve total cost of ownership (TCO)," he stated. He added that Samsung's new 1.25 volt DDR3 module uses 15 percent less energy than the previously most advanced DDR3, and 60 percent less than mainstream modules.

Lim Wei Wah, Microsoft's head of Asia IT infrastructure services, lent credence to Samsung's claims, noting that since switching its data center from 50nm DRAM to the latter's 30nm memory modules, Redmond was able to reduce its memory utilization from 33 percent to 15 percent.

Additionally, with the smaller process geometry, less energy is needed to maintain cooling temperatures in server rooms, and the servers are better able to withstand higher temperatures, said Lim, who was also a conference speaker. He noted that an 18 percent reduction in power consumption at the server system level translates into a 23 percent power reduction at the datacenter level.
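Lim did not spell out the assumptions behind that 18-to-23 percent translation, but one way the arithmetic can work out is if the cooling overhead shrinks alongside the IT load, for example because warmer room temperatures become acceptable. The sketch below illustrates this with assumed PUE (power usage effectiveness) values chosen purely for illustration.

```python
# One way server-level savings can amplify at the datacenter level: if the IT
# load drops AND the facility overhead (PUE) also improves, e.g. because the
# servers tolerate warmer rooms, total facility power falls by more than the
# server-level percentage. The IT load and PUE values here are assumptions.
it_load_before_kw = 1000.0   # assumed IT (server) load
server_level_saving = 0.18   # 18% reduction at the server level, per Lim

pue_before = 1.8             # assumed facility overhead before the change
pue_after = 1.7              # assumed slightly better PUE at higher set points

facility_before = it_load_before_kw * pue_before
it_load_after = it_load_before_kw * (1 - server_level_saving)
facility_after = it_load_after * pue_after

datacenter_saving = 1 - facility_after / facility_before
print(f"Facility power: {facility_before:.0f} kW -> {facility_after:.0f} kW")
print(f"Datacenter-level saving: {datacenter_saving:.1%}")
```

With these illustrative numbers the datacenter-level saving works out to roughly 23 percent, in line with the figure Lim quoted.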

The Microsoft executive then suggested that "rightsizing of one's server configuration can improve overall energy efficiency". He said companies should assess whether hardware additions, be they more servers, storage or memory, are optimal for the business tasks they are intended for.

Not applicable to all
Asked if such component-level savings would be applicable to all companies, IDC's Saxena told ZDNet Asia on the sidelines of the conference that it would be beneficial for "anyone who has significant scale in terms of servers".

For small companies that have a defined set of servers, the impact might be less significant, he noted.

The analyst went on to add that 600 to 700 servers would probably be the baseline for companies looking to reap benefits from optimizing the core of their IT infrastructure, but this could decrease in time as "convergence" in the datacenter environment continues and technology improvements for equipment such as blades kick in.

Saxena added that not all innovations from Web 2.0 companies should be applied in traditional enterprise datacenter settings, but these developments would "expand the discussion" in terms of how organizations envision future datacenter initiatives.

Korean companies, for example, which face constraints on land space and costs, might take a leaf out of Google's book and deploy portable data centers outside the city center, the analyst added.
