The Atom processor has had a long gestation. It started in 2004, when work began on the Bonnell core design project: a ground-up design of a brand-new x86 processor, intended to put low power consumption first wherever possible.
At the time, Intel had three main processor architectures: the Itanium, for high-performance computing; the x86 for everything mainstream; and ARM, for embedded products. Those included smartphones and handheld computers — large and growing markets, but ones where Intel wasn't competing especially effectively. The question Intel asked itself was whether it could extend the x86 into that area, bringing with it the huge advantage of compatibility with existing PC software.
It was a hard question. ARM has always been an exceptionally power-efficient architecture, with an instruction set explicitly designed for simple, fast decoding. This RISC (Reduced Instruction Set Computing) approach leads to low-complexity, and therefore low-power, hardware. The x86, on the other hand, has an instruction set designed to provide many powerful instructions with many options: CISC (Complex Instruction Set Computing). That makes the programmers' job easier (at least, it did in the days when programmers worried about instruction sets), but it requires a large, complex and power-hungry processor.
The difference in approach becomes more pronounced when power considerations make it impossible to simply increase the clock speed for more performance: an already complex design becomes even more complicated as it adds more features like speculative and out-of-order execution. The Pentium chips of 2004 were heavily mutated from the original designs, and most of those mutations were aimed at speed, not efficiency.
So the Bonnell team threw everything away and started from the simplest possible x86 design, adding a feature only if its incremental performance benefit matched or exceeded its cost in power consumption. In particular, the team concentrated on the idea that within every CISC is a RISC struggling to get out: while the x86 instruction set is lopsided and baroque, most of its instructions are at heart simple ones. Moreover, those are the instructions most commonly used; x86 processors have long exploited this realisation, but here it assumed new importance.
There are other advantages to simplicity. Testing becomes easier. More excitingly, the size of the chip goes down. Chip company profits depend on a simple equation: it costs the same to process a wafer whether that wafer carries one enormous processor or several thousand tiny ones, and you can make a lot more money selling several thousand tiny processors. Better still, a single defect on the wafer kills only one processor: if that's one out of a hundred, it's a 1 percent failure rate; if it's one out of ten thousand, it's 0.01 percent.
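To make the arithmetic concrete, here is a minimal Python sketch of that yield argument. Everything in it is illustrative rather than Intel's data: it assumes a 300 mm wafer, an average of one defect per wafer, hypothetical die sizes, and the textbook Poisson yield approximation (yield = e^(-area x defect density)).

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Textbook Poisson yield model: probability a die has zero defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough upper bound: wafer area divided by die area
    (ignores edge loss and scribe lines)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

# Hypothetical figures echoing the article's example: one defect
# spread, on average, over a whole 300 mm wafer.
WAFER_DIAMETER = 300.0
wafer_area = math.pi * (WAFER_DIAMETER / 2) ** 2
defect_density = 1.0 / wafer_area  # one defect per wafer, on average

# Compare one huge die against many tiny ones (die sizes are made up).
for die_area in (700.0, 7.0):
    n = dies_per_wafer(WAFER_DIAMETER, die_area)
    y = poisson_yield(die_area, defect_density)
    print(f"die {die_area:6.1f} mm^2: ~{n:5d} dies/wafer, yield {y:.4%}")
```

Run as written, the big die works out at roughly a hundred per wafer with about a 1 percent failure rate, while the small die gives around ten thousand per wafer at roughly 0.01 percent, matching the figures in the text.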
The early stages of the Bonnell design took place under Intel's traditional cloak of extreme secrecy. However, two public events signalled some success in low-power, high-performance thinking within the company.
In August 2005, Intel formally announced a new transistor design process called P1264. This dramatically reduced leakage current (the single most important parameter in determining how much power a modern chip consumes), cutting it by up to a thousand times compared with other contemporary designs. Even more importantly, Intel had learned how to tune that figure, and was able to trade off power consumption against performance to a fine degree. This opened the way for common architectures that could span everything from extremely low-power portable chips to server-grade processors.
A year later, Intel sold its ARM-based XScale processor division, a move widely interpreted as signalling the end of the company's involvement in the mainstream embedded processor market. Instead, it marked a growing confidence that x86 could, after all, become a contender.