Intel draws battle lines against AMD and ARM in fight to control the cloud's brain

Summary: The world's three most significant chip companies have radically different approaches to the cloud, and which company's technology succeeds will dictate the cost and class of future workloads.

TOPICS: Cloud, Intel, Processors, ARM

To a consumer, a processor is the brain of the computer and all are much alike, but when it comes to the enterprise, different chips can have a huge bearing on the performance and cost of applications.

At this year's Intel Developer Forum in San Francisco, chip giant Intel gave further details on its future chips — announcements that saw it take a different approach to the cloud from its rivals AMD and ARM.

Intel spent much of the show emphasising its upcoming Haswell processor and not talking about what lies outside the processor, such as the networking fabric.

At IDF 2012, Intel gave further details on its Haswell processor. Image: Jack Clark

Meanwhile, AMD brought out a server that married a high-end 'Piledriver' Opteron chip with a bespoke ASIC-based networking fabric from recent acquisition SeaMicro. 

Elsewhere, ARM argued that "the nature of servers has changed" and that therefore Intel's commitment to single-threaded performance above parallelism in its mainstream processors could be a misstep.

These three views represent three different takes on how to tackle the problems — and opportunities — posed by selling chips to large datacentre operators that either run their own clouds, or deliver technology in an 'as-a-service' format.

With Haswell, Intel is sending a message that it is still betting on single-threaded performance, while AMD is staking its datacentre future on a combination of chips with large amounts of memory and a very fast networking fabric that lets you cluster lots of chips together.

ARM, however, is the wild card. While there is a year or so to go until its 64-bit chips debut in servers, there is already palpable enthusiasm for ARM chips in the industry, and their power advantages could make them impossible to ignore. Facebook is understood to be evaluating the processors, and other companies with large datacentre estates are likely to be doing the same.

Cloud makes power, not performance, a priority

In the coming years, it's probable that cloud companies — the Googles, Facebooks and Amazons, et al — will become increasingly significant buyers of processors, while enterprise buyers will fall back as they hand over more of their own IT to suppliers, following the growth of cloud computing and 'as-a-service' technology.

The chips the cloud companies want are not necessarily the chips that Intel makes. These companies' priorities are performance-per-watt and, to a lesser extent, a fast input-output layer. Enterprises, on the other hand, prefer to buy servers with guaranteed support and externally developed management software, such as the boxes made by HP, IBM and Dell.

At scale, the electricity a chip consumes becomes one of the key factors in making a buying decision. With power a priority, Intel could be on the back foot.
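To see why power dominates the buying decision at scale, a back-of-envelope calculation helps. The wattages, fleet size, electricity price and PUE below are illustrative assumptions for the sake of the arithmetic, not figures from any vendor:

```python
# Back-of-envelope estimate of the annual electricity bill for a fleet of
# server chips. All inputs are illustrative assumptions, not vendor specs.

def annual_power_cost(chip_watts, servers, price_per_kwh=0.10, pue=1.5, hours=8760):
    """Yearly electricity cost for `servers` chips drawing `chip_watts` each.

    PUE (power usage effectiveness) scales chip power up to account for
    cooling and power-distribution overhead in the datacentre.
    """
    kwh_per_chip = chip_watts * pue * hours / 1000.0
    return kwh_per_chip * price_per_kwh * servers

# Hypothetical comparison across a 10,000-machine fleet: a 95 W server-class
# part versus a 5 W low-power system-on-a-chip.
big_core_fleet = annual_power_cost(95, 10_000)   # ~ $1.25m/year
small_core_fleet = annual_power_cost(5, 10_000)  # ~ $66k/year
print(f"95 W fleet: ${big_core_fleet:,.0f}/year")
print(f" 5 W fleet: ${small_core_fleet:,.0f}/year")
```

Even with these rough numbers, the gap runs to seven figures a year, which is why performance-per-watt, not raw performance, drives purchasing at cloud scale.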

Haswell: Rectifying Intel's sins

"With Haswell, [Intel] is continuing to try to rectify the sins of their heritage — performance at all cost," Mark Davis, chief architect at ARM-based server vendor Calxeda, says. "Haswell is starting to incorporate some of the basic power management mechanisms that ARM and ARM system-on-a-chip vendors have been refining over an extended period of time."

Even Intel rival AMD recognises this — in June AMD became an ARM licensee and plans to add dedicated RISC cores to handle security tasks onto its x86 processors. 

It's a path that AMD is following as it sees a heterogeneous future for chip design. "A general purpose processor core can do anything, but for the best results you probably need a hammer for some jobs and a scalpel for others," John Williams, vice president of servers for AMD, said. "AMD has already embraced ARM technology on security solutions. Our focus is to provide the best solution for the workload in question."

So, when you look at both ARM and AMD, you find a clear recognition of the importance of low-power chips. Both Calxeda and AMD are also attempting to bring fast networking fabrics to their processors. The main difference here is that AMD's chips are x86, so they can run legacy code, while ARM chips can run the LAMP stack but older software must be ported over.

Both companies are moving to heterogeneous chip architectures: AMD via the ARM integration, and ARM via its big.LITTLE architecture, which packs a high-performance core and a low-power core onto the same die.

Intel has gone the other way, with its 50+ core Xeon Phi coprocessor, which works best paired with an Intel Xeon chip via PCIe. However, Xeon Phi's launch is months away and it is, at this stage, mostly targeted at supercomputers.

Along with this, both AMD and ARM are keen on microservers — dense servers that pack a bunch of low-power chips together with good connectivity. Intel, on the other hand, has muted enthusiasm for the technology.

"So far we don't see a significant line of sight to microservers as a relevant technology [for HPC]," John Hengeveld, director of marketing for Intel's high-performance computing group, told ZDNet.

AMD and ARM go one way, Intel goes another

Drawing these threads together, it seems that while AMD and ARM are heading in one direction, Intel is going in another.

Intel's background is in homogeneous chip architectures, and though it likes to argue that when you pair a Xeon Phi with a Xeon you end up with a heterogeneous system that could theoretically go to work in the cloud, it doesn't have much evidence to support such a contention.

AMD and ARM, meanwhile, are rapidly adopting and developing technologies to drive down the relative power consumption of their chips while ensuring good connectivity.

Over the next few years, either Intel or AMD/ARM will have their strategy vindicated. In the future Intel world, power consumption will still be relatively high, but cloud operators will have access to beefy amounts of computing power. In an AMD/ARM world, overall computing power will be slightly lower, but power costs will be dramatically cheaper and the data layer will be much faster.

What happens will determine not only the ways in which cloud operators design their internal software systems, but will ultimately change the cost of providing cloud services. Hold tight, a storm is brewing. 


Jack Clark

About Jack Clark

Currently a reporter for ZDNet UK, I previously worked as a technology researcher and reporter for a London-based news agency.



  • Coolest Headline Ever

    Love the headline of this story--it sounds like a plot for a science fiction movie.
  • Intel strategy getting very tired.

    Since the '90s Intel has played by the same old strategy. Back then it was play the clock speed game, which they won, at the expense of heat, reliability and consumption. They are still doing this while AMD leapfrogs them in low cost, low power "enough cpu, and a decent gpu" packed on a single chip.

    Now the cheese has moved again - datacentres need masses of "partitions" of cpu + data, where for the most part their users are restricted by network bandwidth or other IO bottlenecks, and Intel is betting on CPU horsepower to win this race too!

    I see ARM owning the datacentre in a surprisingly short time (5-8 years), while AMD move into a strong second in both server and desktop space.

    It will only be so long before the masses realize the cost/benefit ratios of the competing solutions.
  • May both win.

    "Intel draws battle lines against AMD and ARM in fight to control the cloud's brain"

    May both win.
    • Take yr point about dramatisation

      Hey CobraA1
      I recognise your point that the press tends to overly dramatise conflict between tech companies - there won't be a clear winner, but whoever becomes the majority large cloud supplier will have significant influence. As always, thanks for commenting.
      Jack Clark
      • Also a comment about monopolies

        It's also a comment about monopolies - if one of them "wins," innovation tends to lose, as a monopoly prefers to keep the status quo rather than to innovate.
  • Common sense says...

    Don't bet against Intel. You'll lose in the end.
  • Yeah

    The industry relies on Intel and the quality they put forth. AMD is just the low cost option, and usually lacks technically in one way or another. Not to mention being the low cost option it draws misers that buy other low cost and usually lesser quality hardware to surround the AMD CPU, which ends up further reducing the perception of quality of AMD products.
    • Rubbish

      • Yeah?

        Explain plz.
  • all depends on how Software evolves/matures...

    the massively parallel, high bandwidth interconnected, data localized systems are of very special interest... because within a few years the data load is going to explode....
  • Nice information but unnecessarily slanted

    "In the future Intel world, power consumption will still be --relatively high--, but cloud operators will have access to --beefy amounts-- of computing power. In an AMD/ARM world, overall computing power will be --slightly-- lower, but power costs will be --dramatically-- cheaper and the data layer will be much faster".
    - While you're obviously biased in your wording, it is an unsupported and unnecessary prediction. If Intel provides "beefy" computing resources, then the others' would not be only "slightly lower", and power costs being "dramatically" lower is clearly a forecasting bias. Further, Intel's strategy would likely seek a reasonable tradeoff in resources versus power, and is able to employ feature reduction, unpublished research, and so on.
    - Sorry for the rant, but it's hard to read an article that obscures good information with needless attempts at persuading the user to come to your own far-too-early predictions. I'd prefer to await and respect your articles.