Facebook has published the specifications and design files needed for companies to replicate its high-efficiency server, rack and datacentre designs.
Through the Open Compute Project, announced on Thursday, Facebook has released the specifications required to build a modern, highly efficient datacentre, from the basic server motherboard to the overall design of the facility. The knowledge comes from a multi-million-dollar investment Facebook has made over the past couple of years as it has built its first dedicated datacentre, a 300,000-square-foot facility in Prineville, Oregon.
Facebook's datacentre facility in Prineville, Oregon runs on Open Compute Project hardware. Photo credit: Alan Brandt
"Facebook and our development partners have invested tens of millions of dollars over the past two years to build upon industry specifications to create the most efficient computing infrastructure possible," Jonathan Heiliger, vice president of technical operations at Facebook, said in a statement on Thursday. "Today we're launching the Open Compute Project, a user-led forum, to share our designs and collaborate with anyone interested in highly efficient server and datacentre designs."
"To my knowledge, this is the first time an industry-leading design has been documented in detail and released publicly," James Hamilton, an engineer at Amazon Web Services, wrote on his blog after the announcement.
Datacentre design methods
Facebook said that it had co-developed the technology with AMD, Dell, HP and Intel. Dell's enterprise-focused Data Centre Solutions division will design and build servers based on the released specifications, and Synnex Corporation will serve as a vendor for Open Compute Project servers.
The design methods that Facebook applied to its Prineville datacentre extend from the server motherboard up through the server chassis and rack, into the air-flow specifications and the design of the datacentre building itself. Combined, these approaches yield a reported power-usage effectiveness (PUE) rating of 1.07. PUE is the ratio of a datacentre's total power draw to the power consumed by its IT hardware, so a PUE of 1.07 means that for every watt used by IT equipment, only 0.07W goes to cooling, lighting and other infrastructure.
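As a rough illustration of that arithmetic (the 1.07 figure is Facebook's; the wattages below are purely example values):

```python
def pue(total_facility_watts: float, it_watts: float) -> float:
    """Power-usage effectiveness: total facility power divided by IT power."""
    return total_facility_watts / it_watts

# A facility drawing 1,070W in total to run 1,000W of IT equipment
# has a PUE of 1.07 -- 70W of overhead for cooling, lighting and so on.
print(round(pue(1070, 1000), 2))  # 1.07
```

Lower is better: a hypothetical facility with no overhead at all would score exactly 1.0.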
For perspective, a recent datacentre built by PUE-optimisation specialist Keysource has reported an annualised PUE of 1.12; a facility by Colt reports 1.21; and Google, which operates a fleet of large datacentres across the world, reported annualised PUEs of 1.10 and 1.21 for the fourth quarter of 2010.
Facebook says its approach has delivered a 38-percent increase in energy efficiency at a 24-percent lower cost.
The specifications Facebook has made available include technical documents and the basic computer-aided design (CAD) files needed to construct the datacentre components. In theory, these would allow any organisation to modify the designs, send the CAD files off to a manufacturer, and have the servers and supporting infrastructure built to order.
"We think it's time to demystify the biggest capital expense of an online business — the infrastructure," Heiliger said.
The motherboards come in two varieties, Intel and AMD, each carrying two processors and supporting 144GB (Intel) or 192GB (AMD) of memory. The Intel board takes Xeon 5500 or 5600 series processors; the AMD board takes eight- or 12-core Magny-Cours CPUs. Both motherboards can be rebooted and have their BIOS or firmware updated remotely over the LAN, saving time in the event of software failures.
Instead of opting for a standard one-rack-unit (1U) or 2U server, Facebook's servers are 1.5U tall, making room for larger fans while retaining efficient use of space. Larger fans move more air per watt of input power, so they are more efficient than their smaller counterparts.
Additionally, the overall chassis has been designed to allow hardware to be installed with a minimum of tools. The motherboards are retained by a single screw, the hard drives with no screws at all. This eases the process of swapping hardware in and out for maintenance, testing and repair.
Facebook has designed a special triple-rack cabinet for its servers. Each cabinet contains two top-of-rack switches, and each of the three columns holds 30 servers, for a total of 90 1.5U servers, or 180 processors, per cabinet.
The social-networking company did not disclose how many cabinets it had deployed within its datacentre.
Facebook is not alone in its class in publishing hardware details, but unlike Google and Amazon, which selectively detail aspects of their servers or overall ethos for datacentre design, Facebook has opted for a level of disclosure sufficient for competitors or start-ups to mimic its underlying hardware.
However, the combination of scale and homogeneity gives Facebook advantages over companies operating a diverse hardware stack.
"The Facebook design won't apply to everyone, just like it probably doesn't apply for some of Facebook's own IT application environments. The variety of hardware and legacy application and physical architectures in most large IT shops mean that it's a non-starter to consider building something that is one size fits all," Mark Thiele, founder and president of non-profit datacentre industry community Data Center Pulse, wrote on his blog.
"That being said, it doesn't mean there aren't one-size-fits-all environments; they just aren't designed to the same efficiency ratings being claimed by Facebook," he said.