AMD takes ARMs against the sea of Intel

Summary: After years of cutbacks, changes at the top and troubles with its foundry partners, AMD has launched a RISCy plan to use ARM chips to take on Intel.


In the competitive world of computer chips, desperate times can call for desperate measures, especially when every chip vendor has an interest in spreading fear, uncertainty and doubt (FUD) about its rivals' gear.

Buy one chip and you risk dooming your organisation through reduced software support (Itanium). Buy another and you'll never get the performance you need (ARM). Purchase something different and perhaps you're gambling on chips made expensive by their manufacturer's low yields on a fabrication process that refuses to behave (AMD).

In the past couple of years this FUD has grown from a breeze into a howling gale as Intel, AMD and ARM try to convince the world that their chips are best. The intensity is growing because the rise of the cloud has fundamentally altered the market for chip sellers, and all of these companies can smell money in the datacentre refreshes of cloud operators like Google, Facebook and Microsoft.

Intel, with its record revenues, huge fabrication facilities and cosy partner relationships, has dominated the conversations of the chip industry.

On Monday, AMD got sick of this and announced a significant deal with ARM, which will see it build server chips based on another company's designs in an attempt to disrupt Intel.

One of the main arguments Intel uses for why its chips beat anything produced by the competition is that they combine high performance with energy efficiency, thanks to its leading process technology.

All about (low) power

But this argument may not work in the datacentres of the new IT environment. Cloud datacentres typically run applications that are comfortable spreading across thousands of relatively weak cores, rather than a select few powerful ones. This model favours the low-power reduced instruction set computing (RISC) chips of ARM over the complex instruction set computing (CISC) chips of Intel and AMD.

AMD's affirmation of ARM's importance to the server market can be read in two ways — either the company really believes ARM has a chance of gaining a significant amount of market share here, or AMD is grasping at one of the few straws left available to a company shrunk and demoralised by successive cutbacks, market losses and leadership changes.

The truth, I think, is a little of both: yes, ARM has a slew of low-power benefits stemming from its RISC architecture that mean its chips can consume less electricity than Intel's. On the other hand, it does not have much of a server software ecosystem yet and, though there have been a couple of announcements, we are still a year away from seeing any kind of 64-bit ARM chip on the market.

Meanwhile Intel is cranking out chips on its advanced 22nm tri-gate process, something that AMD's and ARM's chip fabbers, TSMC, GlobalFoundries and Samsung, have yet to match, and is preparing to make chips on its even better 14nm process.

Knife fight

Ultimately the argument can be summed up as this: Intel accuses ARM server makers of bringing a knife to a gunfight, while ARM server makers think that's fine because for every single gun they have around four to eight knives.
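
To put rough numbers on that analogy, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (per-core performance, watts per core, the overall power budget) is an illustrative assumption rather than a vendor specification; the only point is that, for workloads that parallelise cleanly, what matters is aggregate throughput within a fixed power budget.

    # Back-of-the-envelope: a few fast cores versus many slow ones.
    # All figures are illustrative assumptions, not measured vendor data.

    POWER_BUDGET_W = 400            # assumed CPU power budget for one server

    BIG_CORE_PERF = 1.0             # normalised per-core performance (the "gun")
    BIG_CORE_WATTS = 10.0           # assumed watts per big CISC core

    SMALL_CORE_PERF = 0.25          # assumed: one small core does ~1/4 the work
    SMALL_CORE_WATTS = 1.5          # assumed watts per small RISC core (a "knife")

    def throughput(perf_per_core, watts_per_core, budget_w):
        """Aggregate throughput of a fully parallel workload within a power budget."""
        cores = int(budget_w // watts_per_core)
        return cores * perf_per_core

    print("Few fast cores :", throughput(BIG_CORE_PERF, BIG_CORE_WATTS, POWER_BUDGET_W))
    print("Many slow cores:", throughput(SMALL_CORE_PERF, SMALL_CORE_WATTS, POWER_BUDGET_W))

Change the assumed ratios and the answer flips, which is exactly the argument Intel makes about workloads that still depend on single-threaded speed.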

What's more, as around 50 percent of the total cost of a modern datacentre comes from power consumption, ARM chips' energy-thrifty nature can translate to huge cost savings over the long term.
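
A similarly rough sketch shows what that power bill can look like over a year. Again, every figure here (electricity price, PUE, per-node wattage, fleet size) is an assumption made up for illustration rather than taken from any real operator:

    # Rough yearly electricity bill for a fleet of servers.
    # Price, PUE, wattage and server count are assumptions for illustration only.

    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10            # assumed electricity price, USD per kWh
    PUE = 1.8                       # assumed power usage effectiveness (cooling etc.)

    def annual_energy_cost(avg_watts, servers):
        """Yearly electricity cost for `servers` nodes drawing `avg_watts` each."""
        kwh = avg_watts * servers * HOURS_PER_YEAR / 1000.0
        return kwh * PRICE_PER_KWH * PUE

    x86_fleet = annual_energy_cost(avg_watts=300, servers=1000)
    arm_fleet = annual_energy_cost(avg_watts=120, servers=1000)

    print(f"x86 fleet : ${x86_fleet:,.0f} per year")
    print(f"ARM fleet : ${arm_fleet:,.0f} per year")
    print(f"Difference: ${x86_fleet - arm_fleet:,.0f} per year")

Whether an ARM fleet really can do the same work with the same number of nodes is, of course, the whole argument.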

If the software community gets behind porting applications to ARM, AMD's RISCy gamble could propel the company to record revenues in the cloud; if it doesn't, the bet could bring AMD crashing back down to earth. Over the next five years, we should find out which it is.

About Jack Clark

Currently a reporter for ZDNet UK, I previously worked as a technology researcher and reporter for a London-based news agency.

Talkback

12 comments
  • One thing is false. The "fewer steps" thing.

    "and, yes, its RISC architecture lets it carry out tasks in fewer steps than Intel chips."

    False.

    RISC is all about letting the software, rather than the hardware, do the brunt of the work.

    RISC stands for "Reduced Instruction Set Computer." Which means it has fewer instructions available. Which means that you will need MORE instructions to accomplish many of the tasks that CISC machines can accomplish in a single instruction.

    RISC is not a win-win in all respects. It is a tradeoff. You gain power savings at the expense of raw performance.

    It is this tradeoff that favored CISC in the first RISC vs CISC wars, and it is this tradeoff that favors RISC in the second.

    In the early days of computing, power consumption simply wasn't a factor, and CISC was gaining lots of technological ground (pipelining, superscalar processors, etc). Ultimately, CISC basically became a series of RISC cores with translation layers to make it look like a CISC machine - a technique still in use today.

    RISC doesn't need those translation layers, and thus uses less power. The translation is done in software, which is less efficient and will certainly not give you the same raw performance at the same clock speeds. It will only give you a better performance-per-watt ratio.
    CobraA1
    • Whom are you arguing with?

      You wrote;

      '"and, yes, its RISC architecture lets it carry out tasks in fewer steps than Intel chips."

      False.'

      But the comment you put in quotation marks isn't even in the article. So you're arguing with yourself.

      Kudos.

      Did you win?
      Bozzer
  • RISC only loses to CISC

    when you have to do legacy Windows operations. When you are freed from that mess, you get a startlingly fast machine that is vastly superior.
    Tony Burzio
    • I've already explained the tradeoffs in the architecture . . .

      I've already explained the tradeoffs in the architecture and why they happen. Not sure how your point is relevant. Sure, legacy Windows can be a drag, but that is an OS issue, not a hardware issue.
      CobraA1
  • RISC vs CISC

    The RISC instruction set is NOT optimised for workstation computing. x86 is. That is well known, however RISC is perfectly adapted for server applications, which are just moving chunks of data around. But what EVERYONE is missing is this. An ARM RISC core WITH hundreds of Radeon gpu cores on die can make a formidable workstation cpu. This is well known and understood by the HPC crowd.

    Additionally, I think that we can expect AMD to optimise ARM64 with Radeon on die for mobile applications. This could get them into the mobile smartphone ecosystem before Intel, as Intel has NO credible on-die graphics design. Their present gpu is several generations behind Radeon NOW. Since AMD is NOT standing still regarding the evolution of Radeon, I doubt that Intel could reach parity with Radeon's best within 5 years.

    Is this a RISCy play by AMD? Of course not. It is necessary for the survival of the company. Adapt or die. This move was planned with the acquisition of SeaMicro and with the "limited" ARM license for a security chip. That was rather transparent.
    RAV555
    • nVidia is playing a role as well, can't forget them.

      "An ARM RISC core WITH hundreds of Radeon gpu cores on die can make it a formidable workstation cpu."

      As could an x86 with an equivalent nVidia GPU, which can easily have hundreds of cores as well.

      "I doubt that Intel could reach parity with Radeons best within 5 years."

      Intel's always been terrible with GPU anyways, as any gamer will tell you. They optimize their GPUs for watching movies, which won't tax a system nearly as much as a full screen game. They're years behind both AMD and nVidia in graphics technology, and I agree they're not gonna catch up any time soon.

      Interesting that you mention AMD graphics (formerly ATI), but failed to mention their real competition: nVidia.

      nVidia's Tegra is in many of the upcoming Windows RT tablets, including the Surface RT. Only the x86 version of the Surface is getting an Intel GPU. The ARM version is getting nVidia GPU technology.
      CobraA1
      • The real reason?

        Could it be that the Tegra, with its excellent graphics capabilities, is the real reason behind the ARM/AMD partnership? Maybe, at some point in the future, nearly all of AMD's offerings will be an ARM-based CPU with an on-die Radeon GPU. This would compete with both of its main rivals at the same time, Intel and Nvidia.
        dch48
  • i've been surprised by the processing power of arms (and low power required

    I have a dual core arm android tablet which cost $80. its 3d gaming is pretty awesome, and it streams hd to the tv. it's hard to imagine that the code is actually being run by a jvm. I tend to use it all the time instead of a pc at home (fast to turn on/off, fine at browsing the web). I also have an $80 nas server with an arm, and it's fast enough to stream movies. the cool thing is both only sip a few watts of power. run a pc server for a year and it costs hundreds of bucks in electricity, and takes a lot of space. it's a slam dunk for me: scale up what I've got and you could get the same performance with a much smaller datacentre sipping a hundred times less power. worldwide this could mean hundreds fewer power stations.
    as electricity gets more expensive, this will matter more and more.

    15 billion arm cores have been manufactured to date, and 98% of the one billion cellphones manufactured each year since 2005 have had arm cpus in them.
    stevey_d
    • i haven't checked out x86 assembler recently

      I don't expect this to have changed, since the architecture of x86 would have to change, but doing simple things like a floating point operation followed by a conditional jump based on the result involved doing the op in the floating point unit, moving the result to memory, then moving it from memory into the cpu, then doing the test.
      incredibly expensive. by comparison, the average risc with floating point instructions just does the op and tests it.
      the x86 legacy can really make things horrendously inefficient. this is why people are doing math in the gpu, because the cpu is so kludgey and inefficient.
      stevey_d
    • I agree

      I have a Nexus 7 tablet with a Tegra 3 ARM-based quad-core CPU and I too am amazed at the speed and power. It does things like web browsing faster than my x86 machines and plays videos just as well if not better. It also does an amazing job with the 3D games I have tried. It's only rated at 1.3GHz but it blazes through everything I have thrown at it.
      dch48
  • RISC/CISC corrected

    Hello all,
    As pointed out, some sloppy language in this article indicated that RISC requires fewer steps than CISC to do comparable things. This has now been corrected. Thanks for reading and commenting.
    JC
    Jack Clark
  • Once the first knife wielder loses his head, the other 3 to 7 knives drop

    to the floor among the pitter patter of hysterically retreating footsteps...

    Once Intel addresses its power issues, I suspect many developers will trade up to more modern equipment.
    T1Oracle