
Facebook: 'Open hardware' integral to green IT infrastructure

Open Compute Project strives to model itself after the Apache Software Foundation, with technical contributions vetted by community members.
Written by Heather Clancy, Contributor

Is the secret to building greener, more energy-efficient data centers using "open hardware" optimized for that purpose?

Facebook and its allies in the Open Compute Project certainly would have you believe that this is so. Right now, most server hardware vendors invest in "gratuitous differentiation" rather than true innovation, according to Andy Bechtolsheim, chief development officer at Arista Networks, co-founder of Sun Microsystems, and one of the Open Compute Project's five newly named board members.

"What has been missing is standards at the systems level," Bechtolsheim told attendees of the Open Compute summit held this week in New York.

If you step back and look at what Facebook was able to accomplish at its Prineville, Ore., data center, the evidence certainly suggests that Internet service providers or those building cloud infrastructure would do well to embrace some of this philosophy.

Facebook built its own servers for that facility, adopted a new power distribution design, and approached each rack holistically in order to drive efficiencies. The facility can run workloads using up to 38 percent less energy than its counterparts, at a 24 percent cost reduction, according to the Open Compute Web page. "This isn't just important for the environment, it is green for the bottom line," said Frank Frankovsky, director of technical operations for Facebook and chairman of the Open Compute Project.

Let's be clear: the Open Compute Project isn't a standards organization. It is modeled after the open source software movement, which Frankovsky and other Open Compute Project members suggest has spurred innovation that has far outpaced advances on the hardware side. Community members are being encouraged to contribute designs and architectural best practices. For example, ASUS has submitted motherboard specifications, while Facebook is opening up its OpenRack specifications.

The other thing to keep in mind is that there is a lot more to a data center than just the information technology housed within.

In his presentation during the Open Compute Summit, James Hamilton, vice president and distinguished engineer with Amazon Web Services, pointed out that there has been more data center innovation in the past five years than in the prior 15, inspired by the challenges of scale computing.

The cost of infrastructure directly impacts service costs, so efficiency is paramount. Here are several of the issues Hamilton discussed during his talk:

  • Virtualization: Intuitively, current best practices suggest that less is more in the data center. But Hamilton suggests that data center managers think twice about turning off a server just to save power. "Any workload that is worth more than the marginal cost of power is worth it," he said.
  • Power distribution: Up to 11 percent of the power that heads into a data center is typically lost through conversions and other legacy design issues in the grid. Any conversions that can be eliminated along the way should be eliminated. UPS technology, he suggests, is due for an overhaul. (Facebook, as an example, has redesigned the way it includes backup power in its racks. A rough back-of-envelope sketch of this arithmetic, and of the marginal-cost point above, follows this list.)
  • Temperatures must rise: Even though most managers run their data centers at 77 degrees Fahrenheit today, most systems can tolerate much higher temperatures. If everyone raises the temperature and talks about the results, temperatures will rise across the sector.
  • Use outside air 100 percent of the time for cooling, period.
  • Look to water cooling technologies, including evaporative cooling methods.
  • Think modular: Regardless of the specific servers they use, the most efficient data centers in the world all rely on modular architectures paired with some sort of outside cooling mechanism. That includes Microsoft, Facebook and Amazon.

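To make the power distribution and marginal-cost points above concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the per-stage conversion efficiencies, the server's power draw, the electricity rate, and the value assigned to the workload) is an illustrative assumption of mine, not a figure from Hamilton's talk or from any of the companies mentioned.

```python
# Back-of-envelope sketch of two of Hamilton's points.
# Every number below is an illustrative assumption, not a figure from the talk.

# --- Power distribution losses ---
# Each conversion stage between the utility feed and the server passes along
# only a fraction of its input power, so losses compound multiplicatively.
conversion_efficiencies = {
    "utility transformer": 0.994,
    "central UPS (double conversion)": 0.94,
    "PDU transformer": 0.98,
    "server power supply": 0.95,
}

delivered_fraction = 1.0
for stage, efficiency in conversion_efficiencies.items():
    delivered_fraction *= efficiency

print(f"Power reaching the IT load: {delivered_fraction:.1%}")
print(f"Lost in conversions:        {1 - delivered_fraction:.1%}")
# With these assumed efficiencies, about 13 percent of incoming power never
# reaches a server, the same order of magnitude as the figure Hamilton cited.
# Dropping a stage (for example, rack-level backup power in place of a
# central UPS) recovers part of that loss for everything downstream.

# --- "Worth more than the marginal cost of power" ---
# Hypothetical server: should it stay powered on for a low-value workload?
server_draw_kw = 0.25            # assumed average draw of one server, in kW
electricity_rate_per_kwh = 0.07  # assumed industrial rate, in USD per kWh
hours_per_month = 730

marginal_power_cost = server_draw_kw * electricity_rate_per_kwh * hours_per_month
workload_value_per_month = 20.0  # assumed value of the work the server does

print(f"Marginal power cost per month: ${marginal_power_cost:.2f}")
if workload_value_per_month > marginal_power_cost:
    print("Keep it running: the workload is worth more than the power it burns.")
else:
    print("Powering down saves more than the workload is worth.")
```

The specific numbers will be wrong for any particular facility; the point is the shape of the arithmetic. Conversion losses multiply, so removing a stage pays off for everything downstream, and a powered-on server is justified whenever the work it does is worth more than the electricity it draws.
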
The cynic in me believes that the Open Compute movement faces a long uphill battle in existing data center environments. Yet, as the industry moves to scale computing architectures that can support cloud-delivered infrastructure services and applications, the hardware world may well be holding things back. That, in itself, is a reason to keep a close eye on Open Compute Project developments.
