Just two short years after its launch, Facebook's open hardware project is starting to spread.
Under the Open Compute Project (OCP), Facebook and its partners are committed to developing and sharing designs for compute, storage and general data center infrastructure — not just the servers themselves, but the chassis and racks they sit in and their associated power and cooling.
Servers based on Open Compute's Freedom design are in service in Facebook's North Carolina and Prineville, Oregon data centres. Likewise, a data centre owned by hosted services giant Rackspace, one of the board members of the OCP Foundation, will in a few weeks begin using servers based on a mix of the Freedom specification and Rackspace's own design, alongside an OCP storage system.
But what is the role for Open Compute hardware beyond Facebook and Rackspace, and could open-source hardware become the norm inside the data center?
Mark Roenigk, COO of Rackspace, is bullish about the rate of Open Compute adoption: he believes Open Compute certified hardware could become mainstream within three years — a quarter of the time it took Linux to reach a similar level of adoption as a server OS.
"We will see significant Open Compute infrastructure by that time," he says. "I think we'd be approaching between 35 and 50 per cent of new installations of servers."
Since initial adoption of Open Compute hardware will most likely come from large web and hosting companies, which have much to gain from minimising the running costs of their large IT estates through custom data center designs, Roenigk says OCP hardware adoption will be tied to their refresh cycles — which range from about 19 months for large web companies to about four years for Rackspace.
Cost is a key reason for Roenigk's confidence that Open Compute hardware will spread beyond Facebook and Rackspace. Open Compute hardware is designed to have low purchase and running costs relative to alternative hardware. Designs are targeted at specific computing needs (the hyperscale computing needs of Facebook, for example), removing all extraneous components and materials from servers and their associated infrastructure.
As a measure of how this focus can translate into lower running costs, Facebook says its OCP-stocked Prineville datacentre has a power usage effectiveness (PUE) of 1.09, which is better than the US Environmental Protection Agency's best-practice rating of 1.5.
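PUE is simply the ratio of total facility power to the power delivered to IT equipment, so a figure like Facebook's is easy to sanity-check. The numbers in this sketch are illustrative, not Facebook's actual measurements:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by
    IT equipment power. 1.0 is the theoretical ideal; lower is better."""
    return total_facility_kw / it_equipment_kw

# Illustrative figures: a facility drawing 1,090 kW in total to run
# a 1,000 kW IT load has a PUE of 1.09 — the figure Facebook reports
# for Prineville, and well under the EPA's 1.5 best-practice mark.
print(round(pue(1090, 1000), 2))  # 1.09
```

The overhead above 1.0 is everything that isn't computing: cooling, power distribution losses, lighting and so on.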
The pain and cost for a company of coming up with a design for a rack or motherboard to suit its specific computing needs can also be shared between the pool of engineers working on the Open Compute Project.
"There's the ability to collaborate with engineers all over the world," says Roenigk. "Let's say a company had 40 people working on energy efficiencies and sustainability initiatives, at Open Compute we've got a wide membership and I would certainly hope they'd be able to innovate faster."
"It's changing my economics in such a big way, my speed to market is cut in half. There's huge competitive reasons why this is really, really important for us," says Roenigk.
At the last Open Compute Summit in Santa Clara, new OCP designs included specifications for motherboards, chipsets, cabling and common sockets and connectors.
And as the pool of OCP specifications for data centre equipment grows, Roenigk notes that more organisations will be able to find a design for a server, storage or other data center equipment that suits their needs.
"If I was starting a service provider today I would become a member of Open Compute and I would go and get Rackspace's design. I just get their design and I know exactly the full bill of materials, who built it and the pricing of it," he says.
On top of the custom design benefits are the economies of scale: by piggybacking on orders for Open Compute hardware by major companies like Facebook and Rackspace, a smaller player can benefit from low costs similar to those that large companies are able to negotiate with component suppliers and server manufacturers.
"Now I'm a small guy, but I get to pay the same price as the big guys. For those guys it's a no-brainer. They're going to do it to increase their chances of survival," says Roenigk, adding that even the biggest companies can further drive down equipment costs by pooling their buying power.
Support for Open Compute hardware differs from the support contracts offered by OEM server vendors. Organisations using OCP hardware "don't even have to have engineers on staff technically, as they could just utilise the engineers in the community," says Roenigk, although he acknowledges that the timeframe for support delivery is less certain: "The challenge is the service level. If an organisation had an issue, how quickly could somebody respond to it who wasn't a direct employee?"
Using OCP hardware currently requires organisations to increase their internal investment in support; Rackspace, for example, has taken on an additional nine staff to manage Open Compute hardware. That cost is offset, Roenigk claims, by savings on hardware and running costs.
"Just the hardware savings alone the minimum is 10 per cent, and depending on how well you integrate and virtualise it it could be 40 per cent," he says. "When you consider the additional cost of the licensing fees and integration activities, I can bring in an additional engineer or three and still put savings in my pocket."
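Roenigk's arithmetic is straightforward to sketch. The annual hardware spend and engineer cost below are invented for illustration; only the 10 per cent savings floor comes from Roenigk:

```python
def net_saving(hardware_spend: float, saving_rate: float,
               extra_engineers: int, engineer_cost: float) -> float:
    """Annual hardware savings minus the cost of extra in-house support staff."""
    return hardware_spend * saving_rate - extra_engineers * engineer_cost

# Hypothetical: $50m annual hardware spend at the 10 per cent savings
# floor, minus three extra engineers at $200k each, still leaves $4.4m.
print(net_saving(50e6, 0.10, 3, 200e3))  # 4400000.0
```

Even at the conservative end of the savings range, the extra headcount is a rounding error against the hardware bill.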
Going down the OCP route can also mean taking additional responsibility for the data centre supply chain — for sourcing components, finding server manufacturers and carrying out systems integration to ensure all these parts work together.
"When it comes to server support and supply chain management there are a lot of companies that don't even want to hire one or two people to oversee the process. That's the spot where the Dells and HPs are going to play," says Roenigk.
"But what we're finding is the OCP providers also provide very similar levels of supply chain management," he adds.
The modular data center
The DIY design ethos of Open Compute reflects a wider shift in the data center market, with big web companies looking for alternatives to server designs from OEMs like Dell and HP, which traditionally designed and sourced server hardware. For years, firms like Amazon and Google have been designing their own servers and associated infrastructure, allowing for custom setups to maximise cooling, server density or whatever else suited their needs.
While AWS and Google still jealously guard the details of their data center designs, viewing the efficiency and effectiveness they afford as a source of competitive advantage, Facebook struck on the idea that it could become more efficient by crowdsourcing ideas. The social network decided to share the problem of designing, sourcing, purchasing and integrating data center hardware, and the Open Compute Project was the result.
A central idea behind the project is to break the components of the data center, rack and server into modular parts that can be swapped out as computing needs change. The goal is to move away from the wastage normally associated with the server upgrade cycle. Traditionally, an organisation that only wanted to swap chip architectures inside its servers would find itself also having to upgrade the entire server — as the new chip would need a new motherboard that could, in turn, require new memory and network controllers.
This need to upgrade en masse has restricted organisations' ability to upgrade as and when they want. An organisation that wanted to upgrade their CPU on an annual basis, for example, might only do so every three years because of the associated cost of also upgrading the motherboard and other components, and the need to wait for the right memory, network cards and associated hardware to become available.
Interoperability between different server boards and components requires standardisation around new motherboards and backplane interconnects. At the project's Santa Clara summit earlier this year, the Group Hug slot architecture was revealed, a design that would allow server motherboards to accept ARM SoC, AMD or Intel chips.
To help link these disaggregated components together at an acceptable speed, Intel is working on silicon photonic interconnects and cable designs. These are intended to enable 100Gbps interconnects that have 'such low latency' that components that previously needed to be bound to the same motherboard can be spread out within a rack.
The most likely early adopters of OCP equipment are major web companies like Facebook or major hosting providers like Rackspace, looking for ways to drive down the build, running and refresh costs of their large IT infrastructures. Rackspace, for example, will serve between 70,000 and 75,000 customers per data center, with each customer running between one and 25 applications. Alongside Rackspace, some of the world's largest ISPs and cloud service providers, such as Chinese firm Tencent and cloud CRM specialist Salesforce.com, are members of the project.
The financial sector, another prime market for OCP hardware, is well represented among project members, with Goldman Sachs being one of the five board members of the OCP Foundation. Frank Frankovsky, VP of hardware design and supply chain operations at Facebook and chairman of the Open Compute Project, says financial services companies' interest in the project can be explained by the fact they "are IT companies more than they actually know", thanks to their large-scale compute environments.
Other traditional enterprise sectors may be slower to adopt OCP, says Roenigk, citing the tendency of CIOs to play it safe in their infrastructure investments, although that reluctance may be lessened by financial constraints.
"Adoption by the enterprise is always really slow. CIOs always want to do something that's really safe and secure. Typically their roadmap is a year out as they know what their budget is going to be," says Roenigk.
"But more and more we're seeing CEOs and CFOs instructing CIOs they need to reduce the cost of compute."
As cloud service use by SMEs and enterprises picks up, it's also likely to fuel Open Compute adoption, as workloads move from OEM-friendly big businesses to cloud service providers more open to the OCP's design-your-own ethos.
However, adoption of cloud services by larger businesses has been relatively modest to date, with enterprise favouring private cloud and only using public cloud to provide additional capacity to cope with spikes in demand, according to Roenigk.
Two major holdouts for the project are Amazon and Google, and there's doubt whether they will be willing to sacrifice the operating advantage that their closed approach to datacentre design gives them.
"I think the Googles of the world will stay proprietary for another year or two. They're on the sidelines watching what Open Compute will do," says Roenigk. "They have intellectual property in the design of that infrastructure and they already have great economies, so they are already getting a great price."
Open Compute, not for everyone... at least for a while
Not every organisation relies heavily enough on IT infrastructure, or has such specific computing needs, that it would realise significant benefits from the disruption of swapping OEM servers and generic datacentre infrastructure for OCP-designed kit.
Laurent Lachal, a senior analyst leading Ovum's cloud computing research, believes that adoption will be slower than Roenigk expects, suggesting the appeal of Open Compute equipment will for some time be limited to companies with IT estates of the size and scale of those run by OCP's foundation board members.
"Because of the very nature of its target audience, it's limited. The organisation that launched it is Facebook, a company which needs a very, very large data center estate, therefore the effort to create their own datacentre infrastructure is well worth the cost reduction this results in. All these large companies have the business case to get involved," he says.
Conversely, many smaller businesses and enterprises haven't entertained the idea of swapping OEM servers for open-source alternatives, Lachal claims.
"A survey was carried out last August where businesses were asked about Open Compute and the majority either didn't know or weren't interested. It's a good reflection that the market at large is not really relevant to them, on top of the fact these organisations don't quite understand what it's all about."
Beyond the spread of OCP-compliant hardware, Lachal expects the project's emphasis on modular design will also influence the design of mainstream data center equipment.
In the long run Lachal believes the Open Compute Project will become more widely recognised as a standards body for open-source hardware and Open Compute hardware will eventually find its way into wider enterprise and smaller businesses.
"There will be a trickle down of Open Compute in the long run. It will take about five years to gain significant adoption among the major backers and 10 years to trickle down to other companies."