Enter microservers: low-power servers tailored towards tasks that individually require relatively little compute power to carry out but which need to be performed in very high numbers; for example, serving static HTML elements on a high-traffic website.
Microservers provide a low-power alternative to higher-end Intel Xeon or AMD Opteron-based servers, whose architecture is geared towards jobs demanding far more processing muscle than these light workloads need.
"If you think about how an x86 processor is typically architected today, there are hundreds of millions of transistors on there. A huge chunk of the real estate is involved in cache management and providing performance enhancements for the processor. When you're doing web page serving you don't use any of those transistors, so that's effectively cost and power consumption that give you no benefit," explains Dave Chalmers, chief technologist for HP's enterprise group in EMEA.
Microservers' power consumption is far below the 90W-plus thermal design power (TDP) of processors inside high-end servers: microserver boards typically have a TDP below 45W, in some cases dropping to sub-10W levels.
Smaller, cheaper, better?
These servers are generally based on small form-factor, system-on-a-chip boards, which pack the CPU, memory and system I/O onto a single integrated circuit.
Because of their small size, and the fact they require less cooling than their traditional counterparts, they can also be densely packed together to save physical space in the datacentre. Scaling out capacity to meet demand simply requires adding more microservers. Efficiency is further increased by the fact microservers typically share infrastructure controlling networking, power and cooling, which is built into the server chassis.
The upshot is servers that, when compared to alternatives, can cost less to run and take up less physical space in the datacentre; both key concerns for businesses with a sizeable server footprint.
HP has released figures claiming that 1,600 Calxeda EnergyCore microservers packed into just half a server rack were able to carry out a light scale-out application workload that previously required 10 racks of 1U servers, reducing cabling, switching and peripheral device complexity. The result, according to HP, was that carrying out the workload used 89 percent less energy and cost 63 percent less.
Who will use microservers?
Shipments of microservers are negligible today but are likely to grow rapidly over the next two years.
Interest in microservers is mainly from web-hosting companies, according to Chalmers, who works on Project Moonshot, HP's initiative to develop low-power servers.
"We've even seen some of the big European telcos who want to go back to a much simpler model of dedicated physical hardware to provision for cloud-based infrastructure."
After years of server virtualisation, microservers are attractive in a different way, he says.
"Virtualisation is a very good thing but it adds a layer of management complexity. If the hardware's cheap enough, and these low-power servers are much cheaper, then you have the ability to add that in as well."
Microservers would seem to be a natural choice for web giants like Facebook, which need to serve a high volume of light scale-out workloads and have both the buying power and the control over their hardware and software stack needed to design their own infrastructure.
"When you're a company with the software resources and an understanding of how to specify platforms, you are open to pick the best performance per watt per dollar platform that's out there," says Ian Ferguson, VP of segment marketing at UK chip designer ARM.
Microservers aren't only suited to low-end online content delivery, they also lend themselves to data processing tasks where the workload can be parcelled up and operated on in parallel, such as certain analytics jobs, as well as handling data in non-relational databases.
"They are designed for very high levels of parallelism, you can have tens of thousands of them chewing away at a problem. Even if each individual core is less powerful than a current generation Xeon, the fact you can have thousands and thousands of them, if the problem is of the right shape, means you can have really efficient use of the technology," says Chalmers.
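The "right shape" Chalmers describes is an embarrassingly parallel one: the input can be parcelled into independent chunks, each small enough for a low-power core, with the partial results combined at the end. A minimal Python sketch of that pattern, using word counting as a stand-in workload (the function names and chunk sizes are illustrative, not from any vendor's stack):

```python
from multiprocessing import Pool

def count_words(chunk):
    """Worker: a small, independent unit of work suited to a low-power core."""
    return sum(len(line.split()) for line in chunk)

def split_into_chunks(lines, n_workers):
    """Parcel the input up so each worker gets a similar-sized slice."""
    size = max(1, len(lines) // n_workers)
    return [lines[i:i + size] for i in range(0, len(lines), size)]

if __name__ == "__main__":
    lines = ["the quick brown fox"] * 1000
    chunks = split_into_chunks(lines, n_workers=8)
    # Each chunk is processed independently; on a microserver cluster the
    # pool of local processes would be replaced by a pool of machines.
    with Pool(8) as pool:
        partial_counts = pool.map(count_words, chunks)
    print(sum(partial_counts))  # 4000
```

Because no chunk depends on another, capacity scales by adding more workers, which is exactly the scale-out model microservers are built around.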
Microservers will also increasingly be designed to handle specific types of computing tasks — be that powering search engines or running a certain SaaS offering, according to ARM's Ferguson.
"If you've got good enough technology to do the task, it then becomes a function of what technology you put around the CPU," he says.
This customisation could take place at a number of levels. At the board level it could include bespoke configurations of processor cores, cache, memory and system I/O; at a lower level, it could mean changing the architecture of the processor cores themselves and integrating hardware accelerators for specific tasks into the CPU. It would also extend to shared infrastructure in the server chassis, for example designing the surrounding network fabric to fit the architecture of the microserver SoCs.
Chalmers says that HP is targeting such flexibility with the design of its Moonshot servers, which rely on swappable server cartridges whose components — a mix of CPUs, memory and system I/O — could be customised to carry out specific computing workloads.
"There is extreme flexibility in this style of technology. A cartridge may have multiple CPUs on it and be CPU-centric or it could have lots of memory on it and be memory-centric," says Chalmers.
"We do see an opportunity here for larger customers or industries to say 'What I'd like to have is the hardware configured exactly this way' and we can build that economically for them."
There are limitations, however, to what microservers can be used for: their relatively weedy compute performance and limited memory footprint generally rule them out for mainstream enterprise IT or advanced scientific and technical computing workloads. For these kinds of tasks, traditional servers running higher-powered chips like the Intel Xeon E7 are needed.
Another obstacle to the adoption of microservers is the challenge and cost of adapting software to distribute workloads efficiently between them. Running workloads on these clusters will require software to be rewritten so that microservers can process tasks in parallel, according to Google's senior VP of operations Urs Holzle, which may offset the savings in capital and running expenses.
He also warns of the danger of hitting a performance barrier: a workload parallelised across microservers today might require more performance in future but be unable to be parallelised further, ruling out the continued use of microservers.
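The performance barrier Holzle describes is essentially Amdahl's law: if some fraction of a workload is inherently serial, adding more weak cores eventually stops helping. A hedged back-of-envelope sketch (the 5 percent serial fraction and core counts are illustrative, not figures from Google):

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Maximum speedup from n_cores when serial_fraction of the work
    cannot be parallelised (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With 5% of the work serial, going from 64 to 1,024 low-power cores
# barely improves throughput: the serial portion dominates.
print(round(amdahl_speedup(0.05, 64), 1))    # 15.4
print(round(amdahl_speedup(0.05, 1024), 1))  # 19.6
```

In that scenario a sixteen-fold increase in cores yields well under a 30 percent gain, which is why a workload that cannot be parallelised further may force a move back to fewer, faster cores.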
Another area of risk is networking, as deploying a large number of less powerful servers increases the number of ports required and switching overhead.
The battle for the microserver market
The growing demand for low-power servers has sent server OEMs and chip producers scrambling to carve out a slice of a new market.
HP has its Moonshot initiative, which will ship Intel Centerton and Avoton microservers this year, while Dell has been testing ARM-based servers for just over two years and last year began offering its Copper servers, based on the Marvell Technologies Armada XP processor.
Among chipmakers a new battleground has opened up, primarily between Intel and ARM. The two companies are going head to head with their low-power processors in the microserver market: Intel with its 64-bit Atom Centerton and its forthcoming 22nm Avoton SoC, and ARM and its partners with SoCs based on the 32-bit Cortex-A9, the forthcoming Cortex-A15 and the 64-bit ARMv8 architecture. AMD will also release its low-power Kabini CPUs later this year.
ARM and Intel's chips both have strengths and weaknesses. Intel's Centerton includes features missing from ARM-based SoCs to date, such as a 64-bit architecture and support for error-correcting code (ECC) memory. Intel also has the advantage that the existing software stack runs on its x86 architecture, while much server software is still to be ported to ARM's RISC architecture, despite recent progress in running server software stacks like LAMP and OpenStack. Meanwhile the power footprint of ARM-based microserver SoCs, such as the Calxeda EnergyCore, appears to be lower than Centerton's.
Whatever the outcome of these tussles, the limitations on the types of tasks microservers can handle mean they are not going to replace the traditional server inside the datacentre.
HP projects that while microservers will account for between 15 and 20 percent of server sales by 2015, traditional higher powered servers will still account for about 60 percent of sales. The analyst firm IHS iSuppli has predicted microservers will carve out an even more modest share of the server market, about 10 percent by 2016.
"Using lower power processors means they are cheaper, they use less datacentre real estate and they consume a lot less power. But they are only suitable for certain workloads, there are some things that they are not as good at as a standard x86 processor," says HP's Chalmers.
"We see microservers as incremental to, rather than a replacement for, traditional servers."