Microservers: What you need to know

Businesses are experimenting with clusters of high-density, low-power servers known as microservers, which are suited to the growing number of hyperscale workloads found inside modern datacentres. Here's why they matter.
Written by Nick Heath, Contributor

For decades, servers have been the general-purpose workhorses of the datacentre. These boxes have proven to be jacks of all trades, able to run operations for organisations of every shape and size.

But some businesses don't want a machine that can do everything reasonably well; instead, they want a computer that excels at specific tasks.

Microservers are a new category of system designed to shine when carrying out these well-defined computing workloads.

The need for microservers has in part been fuelled by the growth of the web and online services. That's because the demands that serving this kind of content places on a system — the CPU load and I/O required to deliver static elements for a web page, for example — are predictable.

The quantifiable nature of these workloads allows microserver circuitry to be pared back to what's needed to execute these tasks.

"We call them application-defined servers, as opposed to general-purpose servers," Paul Morgan, HP's hyperscale business manager for industry-standard servers told ZDNet about HP's Moonshot microservers last year.

"With Moonshot we take the application first, and then custom build the cartridge specifically for that application. In doing that we get the best price per performance per watt," he said.

Serving web content is a good example of the type of workload that microservers are suited to for another reason — it's a 'scale-out' workload.

Scale-out workloads are tasks that can be carried out in parallel, allowing them to be split between multiple computers. As demand for a scale-out workload grows, more servers can simply be added to meet it.
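
To make the idea concrete, here's a minimal Python sketch of how a scale-out workload behaves: independent requests can be routed to any node in a pool, and capacity grows simply by appending nodes. The node names and routing scheme are purely illustrative, not a description of any vendor's load balancer.

```python
# Illustrative only: any node can serve any request, so a simple hash
# spreads load evenly, and scaling out is just adding nodes to the pool.
import hashlib

nodes = ["node-01", "node-02", "node-03"]  # hypothetical microserver nodes

def route(request_url: str) -> str:
    """Pick a node for a request; requests are independent, so any node will do."""
    digest = hashlib.md5(request_url.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Meeting extra demand means appending a node; the workload itself is unchanged.
nodes.append("node-04")

for url in ["/index.html", "/logo.png", "/style.css"]:
    print(url, "->", route(url))
```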

Typically, as clusters of servers grow, so does the amount of resource that goes unused. But because the architecture of microservers is already stripped back to the components suited to a designated task, the resulting inefficiency is lessened.

The narrow focus of microservers means their design typically differs from that of a mid-range server powered by a higher-end Intel Xeon or AMD Opteron.

HP's Moonshot microserver cartridges are packed into a high-density chassis. Image: HP

As the name suggests, microservers are small. HP is able to cram more than 400 of its HP ProLiant m300 server cartridges into a single 42U rack, for example, and each of these cartridges can support multiple server nodes.

The ability to cram so many servers into a single rack allows them to share infrastructure, which increases their efficiency and makes them simpler to manage. For example, Moonshot cartridges share power, cooling and networking embedded into the chassis — a Moonshot 1500 chassis has 180 1Gbps lines, one for each of the server nodes when the chassis is maxed out with 45 quad-server cartridges.

The power consumption of a microserver is far below that of a high-end general-purpose system — for a start, a microserver SoC (System on a Chip) typically has a TDP (Thermal Design Power) of 20 watts or below, compared to 90W-plus for a high-end server processor. Beyond the lower-power processors, further energy savings come from pushing circuitry related to networking, cooling and power supply out to the shared microserver chassis.
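
As a rough illustration of what those figures imply, the back-of-envelope arithmetic below compares processor-level power draw for a chassis' worth of 20W SoCs against the same number of 90W server chips. It uses only the TDP numbers quoted above; whole-system draw would be higher on both sides, so treat this as a sketch of the gap rather than a procurement model.

```python
# Back-of-envelope comparison using the TDP figures quoted in the text:
# 20W per microserver SoC versus 90W-plus per high-end server processor.
MICRO_TDP_W = 20    # typical microserver SoC TDP
SERVER_TDP_W = 90   # lower bound for a high-end server CPU
NODES = 180         # e.g. a fully loaded Moonshot 1500 chassis

micro_kw = NODES * MICRO_TDP_W / 1000
server_kw = NODES * SERVER_TDP_W / 1000
print(f"{NODES} microserver SoCs: {micro_kw:.1f} kW")
print(f"{NODES} server CPUs:      {server_kw:.1f} kW")
print(f"processor-level saving:  {1 - micro_kw / server_kw:.0%}")  # ~78%
```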

HP has released figures claiming that, in testing, 1,600 of its Project Moonshot Calxeda EnergyCore microservers — built around ARM-based SoCs and packed into just half a server rack — were able to execute a light scale-out application workload that had previously required 10 racks of 1U servers, while reducing cabling, switching and peripheral-device complexity. The result, according to HP, was that the workload used 89 percent less energy and cost 63 percent less.

How microservers fit alongside other server form factors. Image: Intel

Microservers typically use processors that are not usually associated with servers — energy-frugal SoCs more commonly found powering mobile devices that need to maximise battery life.

Intel- and ARM-based SoCs are most commonly found inside microservers. The 64-bit Intel Atom 'Avoton' C2750, a 2.4GHz processor found inside HP's ProLiant m300 microserver cartridge, has a TDP of 20W and is part of the C2000 family of SoCs, which shares its low-power cores with the Atom chips Intel designs for tablets and smartphones. Meanwhile, Calxeda's ECX-2000 SoC is built around 1.8GHz quad-core 32-bit processors based on ARM's Cortex-A15 design — used in a range of smartphone and tablet chips — and has a TDP of around 12W.

SeaMicro, which is owned by AMD, has also demonstrated how the dense compute cluster model can be extended to more powerful and power-hungry processor parts, offering the option to use an Opteron processor with eight 'Piledriver' cores, running at up to 2.8GHz, as the CPU inside one of its SM15000 appliances. The same appliance is also available with quad-core Intel Xeon E3 processors running at up to 3.7GHz.

What are microservers being used for?

The range of workloads handled by microservers is broadening. The first generation was focused on relatively CPU-light tasks, such as serving static elements on web pages, but the second generation employed a wider range of more powerful (but still energy-frugal) SoCs — and, importantly, added support for 64-bit processing and more memory. This expanded microservers' capabilities to tasks such as serving dynamic web elements (those updated by AJAX, for example), serving hosted desktops and digital signal processing for telcos.

A summary of workloads suited to first-generation microservers. Image: Intel

Errol Rasit, research director for data center dynamics at Gartner, said the number of tasks suited to being scaled across microserver clusters would continue to grow as new workloads, such as big data analytics, emerge.

"It has an applicability to some of the big data analytics workloads where you're looking at extreme scale and parallel processing. So a NoSQL-type environment that is computationally light but throughput-heavy, for example. These sorts of processors could be a good fit for that style of architecture."

A number of microserver clusters already target the big data market, such as AMD's SeaMicro SM15000 appliances, which AMD describes as 'Hadoop in a box' thanks to their certification to run CDH4, Cloudera's distribution of Apache Hadoop Version 4.
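
For a sense of the shape of such workloads, below is a toy word count in the classic MapReduce style that Hadoop distributes across a cluster — computationally light per record, heavy on throughput. It runs over in-memory lists standing in for HDFS blocks; it isn't CDH4 or SeaMicro code, just an illustration of the pattern.

```python
# Toy MapReduce-style word count: per-record work is trivial (split and
# lowercase), so the job's cost is dominated by moving data through the
# cluster -- the computationally light, throughput-heavy profile that
# suits microserver-style nodes.
from collections import Counter
from itertools import chain

def mapper(lines):
    # On a real cluster, each node runs the mapper over its own shard.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    # Hadoop's shuffle would group pairs by key before reducing.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

shards = [["the quick brown fox"], ["jumps over the lazy dog"]]  # stand-ins for HDFS blocks
print(reducer(chain.from_iterable(mapper(s) for s in shards)))
```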

Big-name customers using microserver clusters in production are relatively scarce, with the most prominent example probably being Verizon, which uses SeaMicro's SM15000 appliances to underpin its global cloud platform. More companies are testing microservers, though: PayPal, for example, has been trialling a small number of HP Moonshot microserver cartridges as part of a big data platform. The payment provider used the cartridges to output data into Hadoop Distributed File System flat files and into HP's Vertica Analytics Platform in a single stream.

Although the microserver market is expected to grow, microservers' future use is likely to be limited by how many homogeneous workloads businesses actually run. Most businesses will still need general-purpose servers to execute mixed workloads — running a database or hosting enterprise line-of-business applications, for example.

The processing power of the CPUs inside most microservers is also limited in comparison to higher-end server processors, such as Intel's Xeon E5s and E7s, so microservers won't be suited to computationally demanding workloads unless those tasks can be split into smaller jobs and carried out in parallel on a microserver cluster. This restriction is why initial uses for microservers have been limited to tasks where individual CPU load is low but I/O throughput is high, such as serving static elements on a web page.

The challenge and cost of adapting software to distribute workloads efficiently between microservers will be another obstacle to their uptake. Putting workloads on these clusters will require software to be rewritten so that microservers can process the task in parallel, according to Google's senior VP of operations Urs Hölzle, which may offset the savings in capital and operational expenses.
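
The kind of rewrite Hölzle describes is often a restructuring from a serial loop into an embarrassingly parallel map. The sketch below shows the shape of that change using Python's standard library on a single machine; a cluster scheduler would fan the same map out across nodes. process_item is a hypothetical stand-in for an application's per-record work.

```python
# The same job written serially, then as a parallel map -- the shape a
# workload must have before it can be spread across a microserver cluster.
from concurrent.futures import ProcessPoolExecutor

def process_item(x: int) -> int:
    return x * x  # placeholder for real per-record work

items = list(range(1000))

if __name__ == "__main__":
    # Serial version: one core grinds through every item in turn.
    serial = [process_item(x) for x in items]

    # Parallel version: items are independent, so each worker (or node)
    # can take its own slice of the input.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(process_item, items))

    assert parallel == serial
```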

Having said that, the number of servers running homogeneous workloads is liable to increase over time, as new software that better exploits parallelism comes to the fore, as companies increase their use of scalable workloads like big data analytics, and as more workloads migrate to cloud services focused on running specific computing workloads.

Another factor that may slow adoption is that, on the surface, microservers are solving a problem that's already been dealt with by virtualisation. Like microservers, virtualising general-purpose servers provides a way to match physical compute, storage and networking to demand.

The benefits of moving homogeneous workloads from virtualised general-purpose servers to a microserver cluster may not be immediately apparent, however.

"This is a very challenging comparison to make for many end-user organisations because if power is their main driver then yes they can get a lot of efficiency out of virtualising a mainstream product," said Gartner's Rasit. "For a large mainstream audience, the virtualisation capabilities on a Xeon or Opteron product is probably going to be the status quo for them."

"You have to really look at power draw of these systems and look at power draw of the low-energy systems and see if there is a comparison. This is why it's one step too far for many organisations. There may or may not be a benefit, but you have to really do some research in order to understand if there's a benefit or not," added Rasit.

One possible advantage of microserver clusters over virtualised servers is availability. Rasit gives the example of a SeaMicro box with 512 Atom SoCs compared to a two-socket Xeon server, such as the HP DL380, running 512 VMs. Both boxes could be used to serve web content, but if one of the SeaMicro SoCs goes down there are still 511 server nodes serving content, while losing the Xeon server would take out all 512 VMs at once.
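
Worked through, the arithmetic behind that example looks like this — the numbers are the ones Rasit uses, and the point is simply the difference in blast radius when one physical component fails.

```python
# Blast-radius arithmetic for Rasit's example: 512 single-SoC nodes
# versus 512 VMs consolidated onto one two-socket Xeon host.
seamicro_nodes = 512
vms_on_one_host = 512

print(f"one SoC fails:  {1 / seamicro_nodes:.2%} of capacity lost")          # 0.20%
print(f"one host fails: {vms_on_one_host / vms_on_one_host:.0%} of capacity lost")  # 100%
```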

Microservers are only a tiny part of the server market at present, and are forecast to grow to a still-modest five percent of global server market revenue by 2017, according to Gartner. Analysts IHS iSuppli are slightly more bullish, predicting that microservers will account for 10 percent of server shipments by volume by 2016.

Predicted growth in microserver shipments. Image: IHS iSuppli

"The market is still in an emerging stage," said Gartner's Rasit. "We see inertia there. The marketplace is fairly conservative in nature, so any shift in paradigm can take five to 10 years to really mature. Look at the introduction of blade-based servers, which isn't a huge leap from rack-optimised servers but took at least the best part of the last decade to be accepted into mainstream organisations."

"We don't expect it [microservers] to be an overnight success, but the opportunities are for specific flavours of workloads that can provide optimisation. But also we have to recognise the struggles that most enterprises have to deal with. Their number one consideration is just keeping the lights on in most cases," added Rasit.

Alongside HP, Dell is the other main OEM to have announced a microserver lineup, and in 2012 began offering its Copper servers, based on Marvell's Armada XP processor.

Dell announced earlier this year that it will make a microserver based on a 64-bit ARM SoC available for customers to test. The microserver will use Applied Micro's X-Gene, which is based on the ARMv8 architecture.

Alongside its work with ARM, Dell has also showcased an Intel-based microserver designed for cold storage. The DCS 1300 is based on an Intel Atom SoC from its C2000 product family and offers up to 12 hot-pluggable 3.5-inch drives in a 1U chassis.

AMD made a public commitment to microservers in 2012 when it bought SeaMicro, which specialises in making servers designed to be packed into ultra-dense, low-power clusters. AMD is fairly vendor-agnostic in the processors it packs into SeaMicro appliances, offering its SM15000 Fabric Compute Systems with both AMD and Intel x86 processors.

Beyond HP, Dell and AMD's SeaMicro, a number of less well-known systems builders are making Intel- and ARM-based microservers. These include Penguin Computing and Boston, as well as start-ups like Servergy, which is building microservers based on Freescale's Power architecture.

The future of the datacentre

Enterprise servers have traditionally been standalone computers, incorporating storage and networking on their own motherboards. The architecture of microserver clusters, by contrast, represents another step towards pushing storage and network interfaces away from individual servers.

Disaggregating server motherboards in this way, in favour of creating separate rack-level infrastructure pools — of compute, storage and networking — points to a potential model for the future of the datacentre that will be enabled by fast data interconnects such as silicon photonics, according to Gartner's Rasit.

The future potential of silicon photonics. Image: Intel

"You wouldn't think of it as pieces of hardware, you'd think of it as pieces of compute, memory resource or I/O resource. The idea is the workload is defined and resources are built based on that workload's requirements," he said.

"What will start to enable that is silicon photonics, which will begin to allow the disaggregation of the systems at scale and at a distance. As photonics starts to come more mainstream the costs are lowered, and that's when we will potentially start to see disaggregation."

A microserver product that exemplifies this approach of turning discrete servers into a fabric of compute, network and storage is SeaMicro's SM15000 appliance, which lashes the available CPUs, storage and memory to a programmable network layer providing fine-grained control over how resources are portioned out to workloads. The appliance's Freedom Supercomputer Fabric provides 1.28Tbps of bandwidth, supporting up to 512 CPU cores and five petabytes of storage in a single system.

Will Intel or ARM rule microservers?

Although it's too early to say whether Intel x86- or ARM-based SoCs have the upper hand in the microserver market, both chip designs have distinct advantages.

ARM's advantage over Intel may lie in the diversity of its ecosystem. ARM licenses its chip designs to hundreds of semiconductor companies, which build those designs into their own chips before selling them into a broad spectrum of markets.

ARM's many partners can produce a wider range of microserver platforms, each of which can have a different design — additional networking capacity or interfaces for peripherals — suited to different computing workloads, be that serving web content or big data analytics.

Intel produces a wide range of SoCs based on its Atom designs, offering more than 50 chips aimed at the server, storage and communications markets based on its 64-bit Atom Avoton platform. However, it's difficult for a single company to compete with an ecosystem of hundreds of chip-makers when it comes to breadth of offerings.

"I don't believe that any one company can design all the chips on the planet. You need to have different people, different specialisations and expertise to scale from little microcontrollers to supercomputers. The business model provides the diversity, and that diversity is one of the reasons the ARM ecosystem has become so strong," Mike Muller, co-founder of ARM told ZDNet in 2012.

Yet the sheer number of companies producing SoCs based on ARM designs may not prove to be the factor that decides which chipmaker dominates the microserver market. For microserver platforms to be sustainable, they need to serve a computing workload carried out by a substantial number of customers. Providing hardware for this limited number of widespread workloads is probably as much within Intel's capabilities as it is within those of ARM's partners.

But the flexibility that ARM offers chipmakers to modify its designs and create a platform that serves a specific need could also prove decisive in winning support from some of the world's largest tech firms.

Google is rumoured to be considering designing an ARM-based chipset for its servers, for example, and Facebook has been experimenting with using ARM servers in production.

The move would fit with the work the two web giants already do to custom-fit their servers, datacentres and software stack to their workloads — Facebook through the Open Compute Project and Google on its own behind closed doors.

And where Google and Facebook are now, the wider enterprise world may follow, according to Gartner's Rasit.

"The greatest influencers of the datacentre today are these hyperscale datacentres. They show an indication of future direction," said Rasit. "They've got a huge number of engineers and a huge amount of technical expertise, and typically have direct control over software they write and deploy in their datacentres. These are all of the ingredients that speak to an early adoption of alternative architectures for optimisation.

"We're already seeing a lot of this architecture on trial in the webscale datacentres. But what we can't forget is that in those datacentres they have a fairly homogenous mix of workloads, whereas if you were to walk into any bank or retail environment it would be heterogeneous, so what happens in there is a good indicator of future direction, but doesn't guarantee future success."

Intel's reaction to demands for hardware suited to specific computing needs has also been to say it will offer bespoke chip designs to its biggest customers, adding instruction sets, specific clock speeds or customising interfaces on SoCs.

Intel also has a manufacturing advantage over ARM's partners, with the ability to fabricate chips with a smaller die area — allowing it to cram more transistors onto smaller chips and increase processing power per watt. In the microserver space, Intel was also the first to introduce a 64-bit SoC with error-correcting code (ECC) memory support, with the release of its Avoton series last year.

In being first out of the door with 64-bit support in low-power SoCs, Intel got a jump on ARM and its partners, as 64-bit processing and ECC support seem to be requirements for enterprise-grade workloads. The importance of 64-bit support was recognised by HP, which chose Intel's Atom S1200 family of SoCs for its first production Moonshot cartridges, rather than an ARM-based chip.

However, 64-bit ARM-based SoCs aimed at the microserver market are on their way, and HP plans to release a Moonshot cartridge based on a 64-bit ARM chip later this year — probably Applied Micro's X-Gene.

AMD's forthcoming Seattle SoC should boost the business credentials of the ARM platform. Now officially announced as the Opteron A1100, it will use four or eight ARM Cortex-A57-based cores, each running at more than 2GHz. The SoC will include many features aimed at enterprise-grade workloads: it's built around 64-bit processors, has up to 4MB of L2 cache and a shared 8MB L3 cache, and supports up to 128GB of DDR3 memory. The chip also includes an eight-lane PCIe 3.0 controller, an eight-port 6Gbps SATA controller and support for two 10GbE links.

Another advantage Intel currently has over ARM is that most datacentre software was written to run on Intel's x86 architecture.

For many companies, switching existing workloads to ARM could mean rewriting or replacing parts of their software stack — or waiting for someone else to do so — as well as retraining IT staff. While a range of datacentre software can now run on ARM — the LAMP stack commonly used for serving web content, for instance, and Ubuntu LTS — significant parts of common enterprise software stacks, including Red Hat Enterprise Linux, are yet to be ported.
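
One small, practical face of that porting problem: anything that ships native binaries has to know which architecture it's running on and supply (or build) the right artefact. Below is a minimal sketch with hypothetical binary names; the architecture check itself uses only Python's standard library.

```python
# Detect the CPU architecture and pick a matching native build.
# The binary names are hypothetical stand-ins for real artefacts.
import platform

NATIVE_BUILDS = {
    "x86_64": "app-x86_64.bin",   # Intel/AMD 64-bit
    "aarch64": "app-arm64.bin",   # 64-bit ARM (ARMv8)
}

arch = platform.machine()
binary = NATIVE_BUILDS.get(arch)
if binary is None:
    raise SystemExit(f"no native build for {arch}: the stack must be ported first")
print(f"would launch {binary}")
```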

Gartner's Rasit said the different implementations of ARM's chip designs by third-party companies could also complicate porting enterprise software to the platform.

"Some of the ARM designs will ultimately struggle to begin with because of that lack of a coherent software ecosystem," he said. "An Intel or AMD-based variant, because they're already an x86 architecture, are going to be able to run that huge ecosystem of applications."

While much has been made of ARM designs' ability to squeeze more processing power out of a watt of electricity, Intel has been making strides in reducing the power draw and increasing the efficiency of its processors, and still holds a manufacturing-process lead over ARM chip-makers such as TSMC. The two camps may well end up with SoCs of broadly similar efficiency by the time ARMv8 64-bit processors reach the market — close enough that performance per watt may not be the reason enterprises switch to ARM in the datacentre.

The fact that ARM in the datacentre isn't yet a reality for most firms hit home at ARM server chip designer Calxeda towards the end of last year, when it shut down for restructuring after running out of money.

People who had worked at the company said ARM-based servers didn't have the software support or hardware needed to win enterprise customers.

"In [Calxeda's] case, we moved faster than our customers could move. We moved with tech that wasn't really ready for them — i.e., with 32-bit when they wanted 64-bit. We moved when the operating system environment was still being fleshed out — [Ubuntu Linux maker] Canonical is all right, but where is Red Hat? We were too early," Karl Freund, Calxeda's former VP of marketing, told The Register.

Calxeda's difficulties — despite HP choosing Calxeda SoCs for its proof-of-concept Moonshot cartridges — demonstrate how limited the market for microservers still is, said Gartner's Rasit.

"You would have thought the support of the number-one systems and server maker in the world is pretty much a guaranteed win, but that really represents, in terms of Calxeda's demise, the stage of the market that we're at — we're still at a very early stage," he said.

Conclusion

Microservers are still in their infancy, and not yet widely used in production by enterprises. But there is growing business demand for application-specific hardware, as firms begin to look for new ways to carry out homogeneous workloads at scale.

The spectrum of workloads that microservers can carry out is widening, as more enterprises take on tasks that can be distributed across microserver clusters and processed in parallel, such as big data analytics. Microservers also provide an alternative platform for specialist jobs previously restricted to high-cost proprietary technology, such as digital signal processing. And as the capabilities of the low-power chipsets underpinning microservers grow, more tasks will become possible on the platform.

Perhaps more significantly, microserver clusters provide a glimpse of how the total compute, storage and networking within a datacentre may one day be disaggregated and reconfigured into flexible pools of resource, to be dipped into as needed.
