HP is in a tough position when it comes to cloud customers, and its low-power high-density Gemini server acknowledges that.
Gemini, with its ability to support multiple processors of multiple generations via different "server cartridges", sees HP take a new approach with its servers, placing the emphasis on maintainability, serviceability and, crucially, support for the types of energy-thrifty non-x86 chips designed by Intel rival ARM. It is part of the company's Project Moonshot effort, which sees the giant cosy up to large cloud operators by committing to the development of very dense, very low-cost servers.
As clouds have grown, large datacentre operators like Facebook and Google have taken a sceptical look at the expensive, proprietary equipment they have been buying from major enterprise suppliers like HP, Cisco, Dell and IBM, and have fought back by designing their own hardware.
Google, for example, moved its internal network to the open source OpenFlow software-defined networking technology this year and sat it on top of Google-designed networking gear which talked to (some) Google-designed servers. Facebook, meanwhile, has turned to Asian Original Design Manufacturers (ODMs) like Quanta to build its low-cost servers based on the specifications it is attempting to popularise through the Open Compute Project.
With Gemini, HP hopes to court the large companies that have drifted away from it.
"We certainly understand their interest in designing their own servers and having them built one way or the other, but we think that [Project] Moonshot gives us an infrastructure into which we can deploy servers that are workload optimised," Ed Turkel, HP's marketing manager for high-performance computing, tells me. "Each individual server card could be designed for a particular workload or a particular purpose that would lend themselves well to being used in these particular types of environments where currently they are designing their own [hardware] to their own infrastructures."
The Facebook Connection
A further point of interest in the scheme is the relationship between the design of Gemini and some of the technologies and techniques popularised by Facebook through the Open Compute Project.
At the moment, Facebook is advocating the design of servers, storage and racks that place an emphasis on modularity and serviceability. Conversations I've had with Facebook's head of IT infrastructure, Frank Frankovsky, have indicated to me that the company plans to push this further, with rough goals of moving network interface cards off individual servers and into a federated top-of-rack switch. Basically, the Open Compute Project hopes to unbundle as many of the technologies of the datacentre as possible, so kit can be swapped out very easily.
With Gemini, HP has taken these design cues and run with them, probably because the same people who built Gemini also work on the Open Compute Project.
"The activity in HP around Open Compute and Project Moonshot are taking place by the same organisation... there are shared individuals," Turkel said, though he added: "I would not say they are targeted at the same thing."
But Turkel indicated that HP is interested in the Open Compute Project's Open Rack technology, which ultimately aims to move power management, cooling and networking away from individual servers and into the rack itself, leaving just the processors behind.
"We are looking at potentially adding in... slightly different server form factors, potentially some different ways of doing power and cooling," he says, "that would be applicable and again that might cause differences in racking that may or may not look exactly like what the Open Compute Rack looks like."
Either way, Gemini is an important platform for HP and could be the key the IT giant needs to open the datacentre doors of Facebook, Amazon, Google and friends.