To a consumer, a processor is the brain of the computer and all are much alike, but when it comes to the enterprise, different chips can have a huge bearing on the performance and cost of applications.
At this year's Intel Developer Forum in San Francisco, chip giant Intel gave further details on its future chips, in announcements that saw it take a different approach to the cloud from its rivals AMD and ARM.
Intel spent much of the show emphasising its upcoming Haswell processor, saying little about the components outside the processor, such as the networking fabric.
Meanwhile, AMD brought out a server that married a high-end 'Piledriver' Opteron chip with a bespoke ASIC-based networking fabric from recent acquisition SeaMicro.
Elsewhere, ARM argued that "the nature of servers has changed" and that therefore Intel's commitment to single-threaded performance above parallelism in its mainstream processors could be a misstep.
These three views represent three different takes on how to tackle the problems (and opportunities) posed by selling chips to large datacentre operators that either run their own clouds or deliver technology in an 'as-a-service' format.
With Haswell, Intel is sending a message that it is still betting on single-threaded performance, while AMD is staking its datacentre future on a combination of chips with large amounts of memory and a very fast networking fabric that lets you cluster lots of chips together.
ARM, however, is the wild card. While there's a year or so to go till its 64-bit chips debut on servers, there is already palpable enthusiasm for the ARM chips in the industry and the power advantages could make them impossible to ignore. Facebook is understood to be evaluating the processors, and other companies with large datacentre estates are likely to be doing the same.
Cloud makes power, not performance, a priority
In the coming years, it's probable that cloud companies such as Google, Facebook and Amazon will become increasingly significant buyers of processors, while enterprise buyers will fall back as the growth of cloud computing and 'as-a-service' technology leads them to hand over more of their IT to suppliers.
The chips the cloud companies want are not necessarily the chips that Intel makes. Their priorities are performance-per-watt and, to a lesser extent, a fast input-output layer. Enterprises, on the other hand, prefer to buy servers with guaranteed support and externally developed management software, such as the boxes made by HP, IBM and Dell.
At scale, the electricity a chip consumes becomes one of the key factors in making a buying decision. With power a priority, Intel could be on the back foot.
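To see why wattage dominates at scale, a back-of-envelope calculation helps. The figures below (fleet size, electricity price, per-server wattage and the PUE cooling overhead) are hypothetical assumptions for illustration, not numbers from the article:

```python
# Back-of-envelope sketch: the fleet-wide cost of per-server power draw.
# All figures are assumed for illustration only.

SERVERS = 100_000            # assumed fleet size for a large cloud operator
HOURS_PER_YEAR = 24 * 365    # 8,760 hours
PRICE_PER_KWH = 0.10         # assumed electricity price, dollars per kWh
PUE = 1.5                    # assumed power usage effectiveness (cooling overhead)

def annual_power_cost(watts_per_server: float) -> float:
    """Yearly electricity bill for the whole fleet, in dollars."""
    kwh = watts_per_server / 1000 * HOURS_PER_YEAR * SERVERS * PUE
    return kwh * PRICE_PER_KWH

# A 30 W saving per server, multiplied across the fleet:
saving = annual_power_cost(130) - annual_power_cost(100)
print(f"Annual saving from 30 W less per server: ${saving:,.0f}")
```

Under these assumptions, shaving 30 W off each server saves roughly $3.9 million a year across the fleet, which is why a lower-power chip can win a deal even if it gives up some raw performance.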
Haswell: Rectifying Intel's sins
"With Haswell, [Intel] is continuing to try to rectify the sins of their heritage — performance at all cost," Mark Davis, chief architect at ARM-based server vendor Calxeda, says. "Haswell is starting to incorporate some of the basic power management mechanisms that ARM and ARM system-on-a-chip vendors have been refining over an extended period of time."
"Haswell is starting to incorporate some of the basic power management mechanisms that ARM and ARM system-on-a-chip vendors have been refining over an extended period of time" — Mark Davis, Calxeda
Even Intel rival AMD recognises this: in June, AMD became an ARM licensee, and it plans to add dedicated RISC cores for handling security tasks to its x86 processors.
It's a path that AMD is following as it sees a heterogeneous future for chip design. "A general purpose processor core can do anything, but for the best results you probably need a hammer for some jobs and a scalpel for others," John Williams, vice president of servers for AMD, said. "AMD has already embraced ARM technology on security solutions. Our focus is to provide the best solution for the workload in question."
So, when you look at both ARM and AMD you find a clear recognition of the importance of low-power chips. Both Calxeda and AMD are also attempting to bring fast networking fabrics to their processors. The main difference here is that AMD's chips are x86, so they can run legacy code, while ARM chips can run the LAMP stack, but old software needs to be ported over.
Both companies are moving to heterogeneous chip architectures: AMD via the ARM integration, and ARM via its big.LITTLE architecture, which pairs a powerful core with a low-power core on the same die.
Intel has gone the other way, with its 50+ core Xeon Phi coprocessor, which works best paired with an Intel Xeon chip via PCIe. However, Xeon Phi's launch is months away and it is, at this stage, mostly targeted at supercomputers.
Along with this, both AMD and ARM are keen on microservers: dense servers that pack many low-power chips together with good connectivity. Intel, on the other hand, has shown muted enthusiasm for the technology.
"So far we don't see a significant line of sight to microservers as a relevant technology [for HPC]," John Hengeveld, director of marketing for Intel's high-performance computing group, told ZDNet.
AMD and ARM go one way, Intel goes another
Drawing these threads together it seems that while AMD and ARM are heading in one direction, Intel is going in another.
Intel's background is in homogeneous chip architectures. Although it likes to argue that pairing a Xeon Phi with a Xeon gives you a heterogeneous system that could theoretically go to work in the cloud, it doesn't have much evidence to support that contention.
AMD and ARM, meanwhile, are rapidly adopting and developing technologies to drive down the relative power consumption of their chips while assuring good connectivity.
Over the next few years, either Intel or AMD/ARM will have their strategy vindicated. In the future Intel world, power consumption will still be relatively high, but cloud operators will have access to beefy amounts of computing power. In an AMD/ARM world, overall computing power will be slightly lower, but power costs will be dramatically lower and the data layer will be much faster.
The outcome will determine not only how cloud operators design their internal software systems, but ultimately the cost of providing cloud services. Hold tight: a storm is brewing.