Why AI will end Intel's processor dominance

The x86 architecture dominated computing in the '90s and aughts, but ARM has won the mobile market. Now another market, AI, is about to escape Intel's grasp as well. A decade from now, x86 will be a niche architecture for legacy apps. Here's why.
Written by Robin Harris, Contributor

AI applications are fundamentally different from spreadsheets and word processors. The data structures are different. The I/O patterns are different. The math is typically 8-bit integer. The algorithms support massive parallelization.
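
To make the arithmetic difference concrete, here is a minimal sketch of 8-bit integer quantization, my own illustration in Python with NumPy rather than anything from a specific chip or product: 32-bit floating-point weights and activations are rescaled to int8, multiplied as integers, and the scale factors are applied once at the end.

```python
import numpy as np

# Hypothetical 32-bit float weights and activations, as a trained network would produce them.
weights_fp32 = np.random.randn(4, 4).astype(np.float32)
activations_fp32 = np.random.randn(4).astype(np.float32)

# Symmetric linear quantization: map the largest absolute value to 127
# and scale everything else to match, rounding to 8-bit integers.
w_scale = np.abs(weights_fp32).max() / 127.0
a_scale = np.abs(activations_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / w_scale).astype(np.int8)
activations_int8 = np.round(activations_fp32 / a_scale).astype(np.int8)

# The accelerator does its matrix math entirely in integers, accumulating
# in 32 bits to avoid overflow; the scales are applied once at the end.
acc_int32 = weights_int8.astype(np.int32) @ activations_int8.astype(np.int32)
output_from_int8 = acc_int32 * (w_scale * a_scale)

print("float32 result:", weights_fp32 @ activations_fp32)
print("int8 result:   ", output_from_int8)
```

The int8 result is only an approximation of the float32 one, but it is close enough for most inference workloads, and the narrow integer math is far cheaper in silicon and power.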

That's only what we've learned in the last seven years, since the key advances in deep neural networks were published. AI is still young, and there is much more to discover about how to apply it and what its compute requirements will be.

AI is already huge, but is usually hidden in services we consume from Google, Facebook, Microsoft, and Apple. Recognizing faces in your photos. Speech recognition. Face ID. Map directions. Machine translation. All use forms of AI to perform their magic.

But AI can't remain only in the cloud. Self-driving cars require local, high-performance AI. And the sheer volume of data at the mobile edge makes it economical to do as much AI processing locally as feasible.

A rocket for your mind

Back in the '90s, Steve Jobs described computers as bicycles for the mind. If that's so, AI is a rocket for the mind. And mobile devices are the launch pad.

Intel has less than a third of the 4.4 billion computers currently in use, with almost no presence in smartphones, the predominant platform. Perhaps Intel's processor dominance has already ended. But things are about to get worse.

AI workloads are different enough that a very different architecture is required to run them efficiently and swiftly. ARM-based systems are the leading platforms, and a lot of research is going into integrating AI co-processors and key AI operations with ARM.

But even that doesn't look like it will be enough. Researchers are moving away from highly parallel GPUs toward custom hardware. Why? Power efficiency.

As the recent paper "Deep Neural Network Approximation for Custom Hardware: Where We've Been, Where We're Going" puts it:

Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy efficiency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-efficient alternatives, increasing the feasibility of network deployment.

General-purpose processors, let alone GPUs, simply can't keep up. That's the threat to Intel.
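
One widely used form of the approximation the paper describes is pruning: removing the smallest weights so the network becomes sparse and cheap for custom hardware to evaluate. Here is a minimal sketch of magnitude pruning, my own illustration in Python with NumPy, not code taken from the paper.

```python
import numpy as np

# Hypothetical dense layer weights from a trained network.
weights = np.random.randn(256, 256).astype(np.float32)

# Magnitude pruning: zero out the 90% of weights with the smallest
# absolute values, leaving a sparse matrix that specialized hardware
# can skip over instead of multiplying by zero.
threshold = np.quantile(np.abs(weights), 0.90)
sparse_weights = np.where(np.abs(weights) >= threshold, weights, 0.0)

kept = np.count_nonzero(sparse_weights)
print(f"kept {kept} of {weights.size} weights "
      f"({100 * kept / weights.size:.1f}% remain)")
```

An accelerator designed around sparsity can skip the zeroed weights entirely, which is where much of the throughput and energy advantage over general-purpose processors comes from.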

The Storage Bits take

To recap, Intel today has less than a third of the computer processor market. They whiffed on the mobile market, which is surprising given that just a few years earlier they had to abandon power-hungry Pentiums in favor of the notebook-first Core architecture, which went on to power their server chips as well. Yet even then Intel execs never imagined a world where hundreds of thousands of their chips would fill a single data center, and billions of other computers would run on small batteries.

Rapidly evolving AI data structures and algorithms make programmable systems-on-a-chip (PSoCs) the hot area for experimentation. Most of those include ARM cores, a further disadvantage for Intel.

Intel can protect their shrinking market share by adding neural network features to x86 chips, but those chips aren't in smartphones. And their big cloud customers are perfectly capable of designing their own AI chips.

A decade from now Intel will still be a force in semiconductors. But they will be scrambling to keep up in an AI-powered world.