Much has been made of Google's approach to equipping its datacenters: barebones boxes built from commodity hardware to Google's own specs, doing away with special bells and whistles in favor of simple, standardized systems installed by the thousands, each indistinguishable from its brethren and interchangeable with any other. This approach has made many datacenter operators rethink the way they equip their facilities.
One of the best-known responses to Google's approach is the Facebook Open Compute Project, which aims to demystify server hardware in the datacenter. Facebook has "open-sourced" the server designs it currently uses in its own datacenters, in contrast to Google's practice of treating its in-house designs as proprietary information.
One thing both approaches have going for them is economies of scale: when you are building thousands of the same tightly focused design, the price per unit goes down. Facebook hopes to drive costs down further by bringing others into the fold, so that the designs it purchases serve more needs than its own; regardless, the number of servers needed to equip the datacenters of Google and Facebook is simply enormous. And the benefits of thousands of identical servers go beyond the economics of build costs. The ability to treat servers as interchangeable cogs, and to develop performance heuristics that apply across the entire datacenter, is quite valuable for planning, evaluating, and deploying systems as efficiently as possible.
One of the rumors surrounding Facebook's datacenter hardware was that the company was looking at deploying large numbers of low-power servers, running ARM or Atom processors, as yet another step toward increasing the efficiency of its datacenters. At this week's International Green Computing Conference, Facebook went a step past rumors and delivered a paper comparing Memcached performance on traditional Intel and AMD server CPUs against Tilera's new 32-bit TILEPro64 CPUs.
Memcached is the in-memory key-value cache that Facebook, and many other web-based enterprises, use to deliver data to users faster than would be possible if it had to be pulled from disk. The results comparing the traditional CPUs to the densely packed, multi-core Tilera showed that, watt for watt, the Tilera processors delivered significantly better and more scalable performance, making their use in this application a potential advantage for companies like Facebook that rely on Memcached to serve data to their users with acceptable performance.
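To make the Memcached role concrete, here is a minimal sketch of the "look-aside" caching pattern it is typically used for: check the in-memory cache first, and fall back to the slow backing store only on a miss. This is an illustration, not Facebook's code; a plain Python dict stands in for a real Memcached client, and `db_lookup` is a hypothetical stand-in for a disk-backed database query.

```python
# Look-aside caching sketch. A dict stands in for a Memcached cluster;
# in production you would use a Memcached client library instead.
cache = {}

def db_lookup(key):
    # Hypothetical slow path: in a real deployment this would query
    # a database on disk.
    return f"value-for-{key}"

def get(key):
    value = cache.get(key)
    if value is None:            # cache miss: fall through to the database
        value = db_lookup(key)
        cache[key] = value       # populate the cache for subsequent reads
    return value

print(get("user:42"))  # first call misses the cache and fills it
print(get("user:42"))  # second call is served entirely from memory
```

The performance question Facebook's paper examines is essentially how many of these `get` operations a server can sustain per watt, since at Facebook's scale the cache tier is a major consumer of both machines and power.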
Facebook hasn't committed to using these Tilera-based systems in its datacenters, but should it choose to, it will find itself carrying the flag not just for the open datacenter but for the entire microserver industry. And while Google still earns kudos for its efficient datacenter designs, Facebook may leapfrog it with custom-designed microservers that further increase the efficiency of its datacenters while improving its ability to deliver services to users.