AMD takes ARMs against the sea of Intel
Summary: After years of cutbacks, changes at the top and troubles with its foundry partners, AMD has launched a RISCy plan to use ARM chips to take on Intel.
In the competitive world of computer chips, desperate times can call for desperate measures, especially when every chip vendor has an interest in spreading fear, uncertainty and doubt (FUD) about its rivals' gear.

Buy one chip and you risk dooming your organisation through reduced software support (Itanium). Buy another and you'll never get the performance you need (ARM). Purchase something different again and perhaps you're gambling on chips made expensive by the manufacturer's low yields from a fabrication process that refuses to behave (AMD).
In the past couple of years this FUD has grown from a breeze into a howling gale as Intel, AMD and ARM try to convince the world that their chips are best. The intensity is growing because the rise of the cloud has fundamentally altered the market for chip sellers, and all these companies can smell money in the datacentre refreshes of cloud operators like Google, Facebook and Microsoft.
Intel, with its record revenues, huge fabrication facilities and cosy partner relationships, has dominated the conversation in the chip industry.
On Monday, AMD got sick of this and announced a significant deal with ARM, which will see it produce chips designed by another chip company in an attempt to disrupt Intel.
One of Intel's main arguments for why its chips are better than anything produced by the competition is that they combine high performance with energy efficiency, thanks to its leading process technology.
All about (low) power
But this argument may not work in the datacentres of the new IT environment. Cloud datacentres typically run applications that are comfortable spread across thousands of relatively weak cores, rather than a select few powerful ones. This model favours the low-power reduced instruction set computing (RISC) chips of ARM over the complex instruction set computing (CISC) chips of Intel and AMD.
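To see why, consider a rough back-of-the-envelope sketch. The per-core throughput and wattage figures below are illustrative assumptions rather than measured numbers from any vendor; the point is simply that when a workload spreads happily across many cores, what a fixed power budget buys matters more than how fast any single core is.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed, illustrative figures: one "big" CISC core versus
         * one "small" RISC core. Units of work are arbitrary. */
        const double big_core_work = 4.0,   big_core_watts = 10.0;
        const double small_core_work = 1.0, small_core_watts = 1.5;

        /* For a workload that parallelises well, throughput scales with
         * core count, so compare what a fixed power budget buys. */
        const double budget_watts = 1000.0;

        double big_cores   = budget_watts / big_core_watts;
        double small_cores = budget_watts / small_core_watts;

        printf("big cores:   %.0f cores, %.0f units of work\n",
               big_cores, big_cores * big_core_work);
        printf("small cores: %.0f cores, %.0f units of work\n",
               small_cores, small_cores * small_core_work);
        return 0;
    }

On these made-up numbers the weaker cores deliver more total work from the same power budget, which is the whole basis of the scale-out pitch.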
AMD's affirmation of ARM's importance to the server market can be read in two ways — either the company really believes ARM has a chance of gaining a significant amount of market share here, or AMD is grasping at one of the few straws left available to a company shrunk and demoralised by successive cutbacks, market losses and leadership changes.
The truth, I think, is a little bit of both: yes, ARM has a slew of low-power benefits stemming from its RISC architecture that mean its chips can consume less electricity than Intel's. On the other hand, it does not have much of a software ecosystem yet and, though there have been a couple of announcements, we are still a year away from seeing any kind of 64-bit ARM chip on the market.
Meanwhile, Intel is cranking out chips made on its advanced 22nm tri-gate process, something that AMD's and ARM's chip fabbers (TSMC, GlobalFoundries and Samsung) are yet to match, and is preparing to make chips on an even better 14nm process.
Knife fight
Ultimately the argument can be summed up as this: Intel accuses ARM server makers of bringing a knife to a gunfight, while ARM server makers think that's fine because for every single gun they have around four to eight knives.
What's more, with power consumption accounting for around 50 percent of the total cost of running a modern datacentre, ARM chips' energy-thrifty nature can translate into huge cost savings over the long term.
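As a rough illustration of what that could mean in money terms, here is a small sketch. The fleet size, server wattages and electricity price are assumptions made purely for the sake of the arithmetic, not figures from AMD, ARM or any datacentre operator.

    #include <stdio.h>

    int main(void)
    {
        /* All figures below are assumptions for illustration only. */
        const int    servers        = 10000;     /* assumed fleet size        */
        const double price_per_kwh  = 0.10;      /* assumed electricity price */
        const double hours_per_year = 24 * 365.0;

        const double x86_watts = 300.0;          /* assumed conventional server */
        const double arm_watts = 150.0;          /* assumed ARM-based server    */

        double x86_bill = servers * (x86_watts / 1000.0) * hours_per_year * price_per_kwh;
        double arm_bill = servers * (arm_watts / 1000.0) * hours_per_year * price_per_kwh;

        printf("x86 fleet: $%.1fm a year in electricity\n", x86_bill / 1e6);
        printf("ARM fleet: $%.1fm a year in electricity\n", arm_bill / 1e6);
        printf("saving:    $%.1fm a year\n", (x86_bill - arm_bill) / 1e6);
        return 0;
    }

Halve the power draw and, on these assumptions, the electricity bill halves with it, which is the long-term saving ARM's backers are counting on.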
If the software community gets behind porting applications over to ARM, AMD's RISCy gamble could carry the company to record revenues in the cloud; if it doesn't, the bet could bring AMD crashing back down to earth. Over the next five years, we should find out which is the case.

Talkback
One thing is false. The "fewer steps" thing.
"and, yes, its RISC architecture lets it carry out tasks in fewer steps than Intel chips."
False.
RISC is all about letting the software, rather than the hardware, do the brunt of the work.
RISC stands for "Reduced Instruction Set Computer," which means it has fewer instructions available, which in turn means you will need MORE instructions to accomplish many of the tasks that CISC machines can accomplish in a single instruction.
RISC is not a win-win in all respects. It is a tradeoff. You gain power savings at the expense of raw performance.
It is this tradeoff that favored CISC in the first RISC vs CISC wars, and it is this tradeoff that favors RISC in the second.
In the early days of computing, power consumption simply wasn't a factor, and CISC was gaining lots of technological ground (pipelining, superscalar processors, etc). Ultimately, CISC basically became a series of RISC cores with translation layers to make it look like a CISC machine, a technique still in use today.
RISC doesn't need those translation layers, and thus uses less power. The extra work is done in software instead, which is less efficient and will certainly not give you the same raw performance at the same clock speed. It will only give you better performance per watt.
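To make the "more instructions" point concrete, here is a tiny sketch. The C itself is trivial; the interesting part is what a compiler typically emits for it. The instruction sequences in the comments are hand-written illustrations of the usual pattern, not actual compiler output.

    #include <stdio.h>

    /* Increment a counter that lives in memory. */
    void increment(int *counter)
    {
        /* On x86 (CISC) a compiler can update memory in one instruction,
         * roughly:
         *     add dword ptr [rdi], 1
         *
         * On a load/store RISC ISA such as 64-bit ARM the same update
         * typically takes three simpler instructions, roughly:
         *     ldr w1, [x0]       ; load the counter into a register
         *     add w1, w1, #1     ; do the arithmetic
         *     str w1, [x0]       ; write the result back
         *
         * More instructions, but each one is simpler to decode and
         * execute, which is where the power saving comes from. */
        *counter += 1;
    }

    int main(void)
    {
        int c = 0;
        increment(&c);
        printf("%d\n", c);
        return 0;
    }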
Whom are you arguing with?
'"and, yes, its RISC architecture lets it carry out tasks in fewer steps than Intel chips."
False.'
But the comment you made in quotation marks isn't even in the article. So you're arguing with yourself.
Kudos.
Did you win?
RISC only loses to CISC
I've already explained the tradeoffs in the architecture...
RISC vs CISC
Additionally, I think that we can expect AMD to optimise ARM64 with Radeon on die for mobile applications. This could get them into the mobile smartphone ecosystem before Intel, as Intel has NO credible on-die graphics design. Their present GPU is several generations behind Radeon NOW. Since AMD is NOT standing still regarding the evolution of Radeon, I doubt that Intel could reach parity with Radeon's best within 5 years.
Is this a RISCy play by AMD? Of course not. It is necessary for the survival of the company. Adapt or die. This move was planned with the acquisition of SeaMicro and with the "limited" ARM license for a security chip. That was rather transparent.
nVidia is playing a role as well, can't forget them.
As could an x86 with an equivalent nVidia GPU, which can easily have hundreds of cores as well.
"I doubt that Intel could reach parity with Radeons best within 5 years."
Intel's always been terrible with GPU anyways, as any gamer will tell you. They optimize their GPUs for watching movies, which won't tax a system nearly as much as a full screen game. They're years behind both AMD and nVidia in graphics technology, and I agree they're not gonna catch up any time soon.
Interesting that you mention AMD graphics (formerly ATI), but fail to mention their real competition: nVidia.
nVidia's Tegra is in many of the upcoming Windows RT tablets, including the Surface RT. Only the x86 version of the Surface is getting an Intel GPU. The ARM version is getting nVidia GPU technology.
The real reason?
I've been surprised by the processing power of ARMs (and the low power required).
As electricity gets more expensive, this will matter more and more.
15 billion ARM cores have been manufactured to date, and 98% of the one billion cellphones manufactured each year since 2005 have had ARM CPUs in them.
I haven't checked out x86 assembler recently, but it is incredibly expensive; by comparison, the average RISC with floating-point instructions just does the op and tests it.
The x86 legacy can really make things horrendously inefficient. This is why people are doing math in the GPU, because the CPU is so kludgy and inefficient.
I agree
RISC/CISC corrected
As pointed out, some sloppy language in this article suggested that RISC requires fewer steps than CISC to do comparable things. This has now been corrected. Thanks for reading and commenting.
JC
Once the first knife wielder loses his head, the other 3 to 7 knives drop.
Once Intel addresses its power issues, I suspect many developers will trade up to more modern equipment.