Cheat Sheet: Microservers

Summary: Here's what you need to know about low-cost, low-power microservers and their future role inside the data center.

Microservers are cheap, weedy and diminutive servers. They are servers whose parts have been shrunk and scaled back, allowing them to be packed into clusters.

And that's useful why?

Because not every computing task needs to be carried out by a multi-core brute of a processor. Some tasks individually need relatively little compute power but must be carried out in large numbers, and so can be handled more efficiently by many wimpy cores. Serving the static elements of a web page to millions of people, for instance, or the myriad individual compute jobs that make up a Hadoop big-data analysis.
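
The pattern is easy to sketch. In this illustrative Python snippet, a process pool stands in for a cluster of microserver nodes: each job is trivially small on its own, and the win comes from fanning many of them out in parallel.

```python
# Minimal sketch of the scale-out pattern microservers suit: many small,
# independent jobs spread across many weak workers rather than one powerful
# core. A multiprocessing pool stands in for a cluster of nodes here.
from multiprocessing import Pool

def count_words(doc):
    """A deliberately lightweight per-document job (a Hadoop-style map step)."""
    return len(doc.split())

if __name__ == "__main__":
    docs = ["the quick brown fox", "hello world", "lorem ipsum dolor sit amet"]
    with Pool(processes=4) as pool:            # four "wimpy cores"
        counts = pool.map(count_words, docs)   # each job runs independently
    print(counts)       # [4, 2, 5]
    print(sum(counts))  # merged result across the "cluster": 11
```

Because no job depends on another, adding more (weak) workers scales throughput almost linearly, which is exactly the property that makes such workloads a fit for microserver clusters.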

Why are microservers the better option?

There's less wasted silicon sitting on the server adding to the cost of buying and running the machine.

By removing features unnecessary for lightweight workloads (processor performance enhancements, for example), microservers can carry out these trivial workloads more efficiently than higher-specced alternatives.

The power consumption of microservers' stripped-back silicon is far below the 90W-plus thermal design power (TDP) of processors inside high-end servers: microserver chips typically have a TDP below 45W, with some dropping to sub-10W levels. Lower power consumption means lower running costs and, for the right use cases, more useful computing work per dollar.
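
A back-of-the-envelope calculation shows what those TDP figures mean for the electricity bill. The $0.10/kWh price below is an assumption for illustration, not a figure from the article.

```python
# Rough annual electricity cost of running a chip flat out at its TDP.
# The electricity price is an assumed round number for illustration only.
HOURS_PER_YEAR = 24 * 365   # 8,760 hours
PRICE_PER_KWH = 0.10        # assumed price in dollars per kilowatt-hour

def annual_energy_cost(tdp_watts):
    """Yearly cost of a chip drawing tdp_watts continuously."""
    kwh = tdp_watts * HOURS_PER_YEAR / 1000  # watts -> kWh over a year
    return kwh * PRICE_PER_KWH

print(round(annual_energy_cost(90), 2))  # 90W high-end server chip: 78.84
print(round(annual_energy_cost(10), 2))  # sub-10W microserver chip:  8.76
```

Multiplied across thousands of nodes in a data center, and before counting the knock-on savings in cooling, the gap between those two figures is the economic case for microservers.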

Microservers are generally based on small form-factor, system-on-a-chip (SoC) boards, which pack the CPU, memory and system I/O onto a single integrated circuit.

The small size of the boards allows tightly packed clusters of microservers to be built, saving physical space in the data center.

HP has released figures claiming that 1,600 of its Project Moonshot Calxeda EnergyCore microservers, built around ARM-based SoCs and packed into just half a server rack, were able to carry out a light scale-out application workload that previously required 10 racks of 1U servers, while reducing cabling, switching and peripheral device complexity. The result, according to HP, was 89 percent less energy use and a 63 percent lower cost.

Who will use microservers?

Web hosting companies are prime candidates: HP says most of the interest in its Project Moonshot microservers has come from hosting providers looking to streamline their large data center infrastructures.

Companies serving content over the internet at scale, such as Facebook and Google, are also candidates for using microservers: they not only need to carry out lightweight computing tasks many times over at widely distributed locations, but also have the in-house technical expertise to engineer the hardware and software needed to run microserver clusters.

Large web companies like Facebook have also been testing microservers and various microserver designs have emerged from the Open Compute project.

As use of public cloud services grows, so too is demand likely to grow for microservers suited to handling the lighter cloud service workloads.

The Dell PowerEdge C5125 microserver sled. Photo: Dell

What are microservers' limitations?

Microservers don't have the compute power to effectively carry out more demanding computing tasks, such as enterprise IT, advanced scientific or technical computing workloads.

Rewriting software to run on microserver clusters can also be an overhead: software must be written so that a task can be split between multiple microservers and executed in parallel, for example. Another potential consideration is the additional network infrastructure needed to shuttle traffic between microservers and between clusters.
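
That rewrite typically means adding explicit partition and merge steps that a single big server never needed. The hypothetical Python sketch below shows the shape of it, with node communication faked by plain function calls.

```python
# Hypothetical sketch of the cluster-rewrite overhead: one large job must be
# partitioned across nodes and the partial results merged afterwards. Real
# clusters would ship the chunks over the network; here that is faked with
# plain function calls.
def partition(items, n_nodes):
    """Split the work into n_nodes roughly equal chunks."""
    return [items[i::n_nodes] for i in range(n_nodes)]

def node_sum(chunk):
    """The work a single microserver node would do on its slice."""
    return sum(chunk)

def cluster_sum(items, n_nodes=4):
    partials = [node_sum(chunk) for chunk in partition(items, n_nodes)]
    return sum(partials)  # the merge step

print(cluster_sum(list(range(100))))  # 4950, same as sum(range(100))
```

The partition and merge code adds complexity and the chunks add network traffic, which is why not every workload is worth moving to a microserver cluster.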

What does this mean for the server market?

Disruption. Chipmaker Intel, whose processors power more than 90 percent of servers today, faces competition in the microserver market from ARM, the UK firm that designs the processors inside the majority of mobile phones.

Both firms have low-power processors targeted at the microserver market. In one corner is Intel, with its 64-bit Atom S1200 (Centerton) SoC, the forthcoming 22nm Avoton SoC and its low-power Xeon E3 processors. In the other is ARM and its partners, with SoCs based on the 32-bit Cortex-A9 and Cortex-A15 and the forthcoming 64-bit ARMv8 architecture. AMD, meanwhile, will release its low-power Kabini processor, which combines a multi-core CPU with a Radeon HD GPU, later this year.

The big server vendors HP and Dell are designing new ranges of microservers: HP through its Project Moonshot initiative, which will shortly ship Intel Atom Centerton-based microservers, and Dell, which sells its sub-65W, Intel Xeon E3-based PowerEdge C5220 microservers.

IBM also plans to design what it calls the world's highest-density 64-bit microserver drawer for its IBM/Astron Dome partnership, with a target density of more than 100 nodes, 500 cores and 2TB of memory.

But with companies like Facebook and Google increasingly bypassing the traditional OEMs to design their own custom server and data center hardware, ARM also sees courting organisations like Facebook directly as a way of getting its processors into the data center.

Which is the better choice? ARM or Intel?

There's no clear leader. While Intel's Centerton processors support important enterprise features, such as a 64-bit architecture and support for Error Correction Code (ECC) memory, early ARM-based SoCs, such as the Calxeda EnergyCore, appear to have a lower power consumption than Intel's microserver-targeted chips to date.

Intel has a clear lead over ARM-based SoCs when it comes to running server software, thanks to its dominant market position: a lot of server software that runs on Intel's x86 chip architecture needs to be modified to run on ARM's RISC platform. That said, there has been recent progress in porting server software stacks such as LAMP and OpenStack to ARM.

Despite analyst concerns that in squaring up to ARM in the low-power processor space Intel will take a hit on its 60 percent-plus margins, the company contends it will still make "very good" margins on its Atom S1200 and Avoton server chips.

What are the future uses for microservers?

Microservers are already specialised machines, customised for executing lightweight tasks, and as the market matures, microservers tailored to even more specific computing workloads, such as running industry-specific SaaS apps, are likely to emerge.

Custom architectures built around microservers are already available, such as the AMD SeaMicro SM15000's Fabric Compute System, which supports up to 512 CPU cores and 5PB of storage in a single system, linked by a fabric with 1.28Tbps of bisection bandwidth. HP is also targeting a flexible design with its Moonshot microservers, which will have swappable server cartridges whose components (CPUs, memory and system I/O) can be tailored to specific computing workloads.

So the server is dead, long live the microserver?

No, not at all. Microservers are expected to sit alongside, rather than replace, traditional higher-power, less specialised servers.

Microservers are expected to account for about one fifth of server sales by 2015/16, and are seen as a new server type rather than a usurper of traditional machines.

About

Nick Heath is chief reporter for TechRepublic UK. He writes about the technology that IT-decision makers need to know about, and the latest happenings in the European tech scene.
