How Facebook ended up with baked potato inside its servers

Facebook's open hardware chief on how attempts to design the world's most efficient datacentre led to the smell of fries and lots of hungry engineers.
Written by Nick Heath, Contributor

The smell of chips cooking in the datacentre is rarely good news.

But the unmistakable odour given off by servers Facebook was testing earlier this year was more French fries than modern microprocessor.

How did Facebook end up baking potatoes inside servers? As Frank Frankovsky, Facebook's VP of hardware design and supply chain operations, explained, it stemmed from the company's experiments under the Open Compute Project (OCP).

Under the OCP, Facebook and its partners have committed to developing novel designs for compute, storage and general datacentre infrastructure: not just the servers themselves, but the chassis and racks they sit in and the associated power and cooling. They have also committed to sharing those designs so they can be refined and built upon.

Frankovsky and his OCP partners had been looking for a way to reduce the amount of waste material generated by a server, which led them to remove the server's lid. The problem was that a lidless server didn't direct enough air over the top of the CPUs for cooling. Not being big fans of adding environmentally-unfriendly components, they hit on the idea of using the material used to make Spudware, kitchen utensils made out of 80 percent potato starch. Unfortunately, there was a downside.

"We created a thermal lid out of that starchy material and found out pretty quickly that when you heat that up it smells a lot like French Fries, so people in the datacentre were getting pretty hungry. It also gets a little floppy and gloopy," he said at a briefing in London today.

It's not the first time that Facebook's OCP experiments have produced some pretty unorthodox outcomes.

Earlier this year Facebook talked about how an actual cloud had formed inside its datacentre in Prineville, Oregon, as a result of water condensing out of air that had passed through its fresh-air cooling system, leading to power supply failures and servers shutting down and rebooting.

The upshot is that the spec for OCP power supplies now includes a special coating to prevent condensation from forming.

"We learned a lot from that and we applied some conformal coating. Now this second condensation event occurred and we had nothing fail," said Frankovsky.

 

Pushing the limits of the datacentre

Frankovsky said that datacentre operators are generally reluctant to deviate from tried-and-tested ways of building and running servers, something which costs them in terms of efficiency.

"What I would say to those operators is 'Start pushing that envelope a little bit harder'," he said, adding that computer hardware can survive in far more challenging conditions than is generally accepted.

"Computing is fairly resistant to heat, humidity and even condensation, believe it or not."

Facebook runs its datacentres without computer room air conditioning, uses 100 percent outside air for cooling, removes the room-wide uninterruptible power supply (UPS) and delivers "higher voltage AC power directly to the server".

As a result, Frankovsky said, Facebook's datacentres achieve a power usage effectiveness (PUE) rating of 1.07, far better than what he called the "gold standard" for datacentres of 1.5.

PUE is the ratio of the total power a facility draws to the power that actually reaches its servers – so a datacentre with a PUE of 1.5 needs to draw 1.5W of power to get 1W to a server.
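As a rough illustration of what those ratings mean in practice, here is a minimal sketch (in Python) comparing the two figures quoted above; the 1,000kW IT load is an assumed number, not one from Facebook.

    # Compare facility power draw at the two PUE ratings mentioned above.
    # The 1,000kW IT load is a hypothetical figure chosen for illustration.
    it_load_kw = 1000  # power actually reaching the servers

    for pue in (1.5, 1.07):
        facility_kw = it_load_kw * pue          # PUE = total facility power / IT power
        overhead_kw = facility_kw - it_load_kw  # lost to cooling, power conversion, etc.
        print(f"PUE {pue}: draw {facility_kw:.0f}kW to deliver {it_load_kw}kW "
              f"({overhead_kw:.0f}kW of overhead)")

At that assumed scale, the gap between the "gold standard" and Facebook's figure works out to roughly 430kW of overhead.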

"There are very few areas in the world that are so hot and humid that you can't get the inlet temperatures to a point where the electronics would survive," he said about the decision to remove air conditioning.

"But even if air conditioning isn't a risk they [datacentre operators] are willing to take, there is a lot that can be done in electrical efficiency.

"Eliminate the room-wide UPS, a room-wide UPS costs about $2 per watt, the Open Compute battery cabinets that we've also open-sourced, those are about 25 cents per watt.

"So not only could they save a bunch of money from a capex perspective, while delivering the same amount of backup power functionality, but it's also far more efficient because they're not transferring the AC to DC conversions, so they're not losing the power."

Facebook and its OCP partners even go as far as incorporating the logistics of how equipment is transported to the datacentre into their designs.

Talking about the designs used for equipment being sent to Facebook's Lulea datacentre in Sweden, which uses entirely OCP-designed infrastructure, Frankovsky said: "We designed the rack enclosure, as well as the pallets that it's transported on, to be able to plug a truck 100 percent. We want to make sure that every one of those trucks is absolutely plugged with equipment, so we don't have any wasted transportation costs."
