How Intel turned failure into success

After its management shakeup, Intel's Dadi Perlmutter is again on the rise - just nine years after his Timna system-on-a-chip project fell flat.
Written by Rupert Goodwins, Contributor
Intel's shake-up of top-level management marks the final stage in a major transformation of the company's strategy.

It is one that started nine years ago with a failure that threatened to derail the career of Dadi Perlmutter, the very same Intel executive who has just been promoted to managing product development for all processors. Indeed, he is now seen as a leading contender for chief executive once Paul Otellini steps down.

That failure was Timna, a design that evolved under Perlmutter's leadership at the turn of the century in Intel's development center in Haifa, Israel. Perlmutter hoped Timna would put the center on the map. Which it did, though at first not in a good way.

Timna was a system-on-a-chip (SoC) design, which combined a processor with some of the other components necessary to build a PC. SoC in itself is no bad thing, and the key integrated component of Timna, a memory controller, is a very good thing to include on the same silicon as the processor itself.

How a processor uses memory is absolutely key to its performance: AMD has made a lot of hay by integrating the memory controller on-chip, connected to the rest of the system over HyperTransport. There are a lot of performance benefits if that integration is done right. It is also cheaper to make single chips, and cheaper to design them into computers.

Timna, however, had issues. There were three major flaws with the design — two conceptual, one practical.

The biggest one was a gamble that went wrong: Intel predicted that a new high-performance memory technology, RDRAM, would become popular and therefore cheap. The full story of RDRAM is for another day: it is enough to note that while it had plenty of technical promise, some very questionable business decisions by the technology's owner left it high and dry and overpriced long enough for other, more mainstream memory designs to catch up.

By the time Timna was coming to fruition, it was becoming clear that RDRAM was not going to be anything like competitive enough for the low-cost sector.

Then there was Timna's positioning. Timna was intended to strengthen Intel's hand in the low-end PC market, where it would compete mostly with Intel itself. That is a decision which is very hard to make, and just as hard to stick to. In general, it takes an outside force to make such moves happen, even in a company such as Intel that has some history of making market-changing moves on its own initiative.

If RDRAM had proved to be a potent, cheap technology, and Timna had given Intel a unique foothold in cheap, high-performing PCs, then the decision would have been seen as a bold move to forestall the external competition. Without those results, it became an exercise in killing the company's own margins.

Which brings us to the third terror of Timna: it did not work properly. The memory controller was reputedly flawed, and flawed in a way that could not easily be fixed. The systems worked after a fashion — by the time Timna was canned in October 2000, there were already motherboards being shown off, and preparations for launch were well underway.

Intel has never talked about exactly what happened: the most persuasive rumor is that with RDRAM failing to take off, the system had to work well with more standard parts, and part of the design just wasn't cutting the mustard. It was designed to be high performance and thus took a lot of power — but did not deliver. Instead, the company looked at the market, looked at the chip, and decided that enough was enough.

Which was bad news for Haifa and Perlmutter, who suddenly saw their future as just another satellite of Santa Clara. But Intel is at heart an engineering company, and good engineers learn from their mistakes.

The design team mulled over the lessons from Timna, in particular those about picking a high-margin market and designing for high performance at low power, and made some bold predictions.

Then they went into negotiations with Otellini, a process vividly described four years ago by Intel manager Shmuel Eden, also part of the push. "We did it the Israeli way; we argued our case to death," Eden is quoted by Bloomberg as saying. "You know what an exchange of opinions is in Israel? You come to the meeting with your opinion, and you leave with mine."

Two years later, they delivered Banias, the first proper notebook processor and the building block for the hugely successful Centrino platform. It was also the start of Intel's strategic move from speed demon to a company with an appreciation of efficiency. This move had long been mooted, but was harder to act on than to accept.

The move now looks a company lifesaver. The lessons learned from Centrino have been absorbed into the mainstream Core 2 design and have seen their most extreme evolution in the Atom. They are now key to all of Intel's target sectors, from embedded systems through netbooks to high-performance server parts. And just in time: with ARM starting its assault on servers, and the mobile revolution gaining pace, the world is mutating faster than Intel's roadmaps — and it's low power all the way.

No wonder Perlmutter's star is in the ascendant.

This article was originally posted on ZDNet UK.