AMD's chief scientist on smoke and mirrors

As AMD rolls out its 100-million transistor Athlon 64, chief scientist Bill Siegle talks to ZDNet UK about how mirror optics will help future processor plans
Written by Matt Loney, Contributor
Q: A decade ago you launched the K5 processor -- AMD's answer to the Pentium -- but ran aground on production problems that scuppered the chip's chances as a real competitor. Are you confident that we will not see a repeat of this with your 64-bit processors?
A: Obviously we have a finite amount of capacity. If we are totally wrong on our projections it could mean a shortfall -- but that would be a happy problem for us, because our projections don't fill the capacity in that fab (Fab 30, in Dresden, where the 64-bit chips are produced).

Frankly we will build capacity as the amount of software and applications grows. The Dresden operation has been three years in production and is working well. This is partly down to the dedicated and efficient attitude of the workforce, but also down to our manufacturing, which has achieved high yield levels.

Can you say what those yield levels are?
We don't give out figures, but today we have mature yield levels on the 100-million transistor chip.

What is the next step in the manufacturing process?
We expect to be producing chips on the 90-nanometre production process in volume by the second half of 2004. We have been making prototypes for some time already. We will begin sampling at significant levels in the second quarter. Then 65nm will come in late 2005, with volume shipments expected in 2006.

As you shrink the die process, what will you use the extra space on the die for?
There are three important features that are new in the Athlon 64. First, there is the Level 2 cache, which is up to 1MB -- we have not done that much cache before. Then there is the memory controller, and the HyperTransport links are significant for performance too.

As the die size shrinks, there are several possibilities. Large caches are one thing we are always looking at. Dual cores also become a possibility. At some point in the evolutionary cycle there comes a limit to how far you can improve performance with a single core on a chip. We will see dual core chips on the 90nm process.

Does that mean you will release a dual core processor by late 2005?

Will that mean much larger die sizes, compared to the 197mm² of the Athlon 64?
No. In a dual core processor, the two cores will share the cache, so we're not talking about a 200-million transistor chip. Consequently, the die size will not double.

What are your predictions for power dissipation?
The Athlon 64 dissipates about 89 watts. Of course we'd like to see this figure go down. As we look ahead, power dissipation is more pressing in the mobile space than on the desktop. The move to 90nm will help enable lower power dissipation, and when that happens we will also make other design enhancements that reduce it in other ways.

Is the 89 watts of the Athlon 64 going to be an issue for notebooks?
We don't think so. (With the Athlon 64) we're providing a processor for desktop replacements. There are a lot of people who use a single notebook for working from various locations, but who do not need long battery life. That is the market we're targeting right now.

You are currently preparing to move to a manufacturing process that uses 300mm wafers. When do you see this happening, and what are the implications?
We have a roadmap in which we think we will see volume production in 2006. It is still an active area of effort so I cannot be specific. This shift will be a productivity enhancer. I would expect yields to be as good or better at maturity. I would also expect to have the next enhancements to our APM technology by then. Right now we can apply it across a lot of 25 wafers, but by then we will be able to apply it at the level of individual wafers.

Could you talk more about the challenges ahead as you move to larger wafers and smaller manufacturing processes?
For many years we have got away with simple scaling, making gates narrower and narrower and oxides thinner and thinner. We ask ourselves whether we can continue this, and there are some serious reasons to say 'no, that is not going to be so simple'. At our current thickness of 1.2nm we are dealing with three or four atomic layers of oxide material between gate and silicon. We are already seeing an appreciable amount of current leakage.

So instead of simple scaling there are many options under investigation today to try to chart a path for future devices that will take us through this decade.

Another problem is that we are now printing features that are smaller than the wavelength of the light used to print them. That becomes extreme in the next couple of generations, and there are lots of innovations we require. I call it learning how to fool mother nature.

It's like the distortion you might hear in an audio hi-fi system. We have a pattern that the designer has created, but we end up with distortions because the bandwidth of the system is not up to the sharpness of the pattern. So what we do is put distortion into the starting signal that anticipates what will happen, producing something that is a little closer to what we desire. This is a simple example of the methods that have to be employed. It will get worse as we move further into sub-resolution technology.
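The pre-distortion Siegle describes is loosely analogous to inverse filtering in signal processing (in lithography, the corresponding mask-level technique is known as optical proximity correction). A minimal one-dimensional sketch of the idea, assuming a Gaussian low-pass "system" and a regularised inverse filter -- all parameters here are illustrative, not anything AMD has disclosed:

```python
import numpy as np

def apply_system(signal, system_fft):
    # Model the band-limited system as a filter applied in the frequency domain.
    return np.real(np.fft.ifft(np.fft.fft(signal) * system_fft))

n = 256
x = np.arange(n)
# Target pattern: a sharp rectangular feature the "designer" wants reproduced.
target = np.where((100 < x) & (x < 156), 1.0, 0.0)

# Band-limited system: a Gaussian low-pass response (illustrative bandwidth).
freqs = np.fft.fftfreq(n)
system_fft = np.exp(-(freqs / 0.05) ** 2)

# Naive approach: feed the target straight in; its sharp edges get smeared.
naive = apply_system(target, system_fft)

# Pre-distortion: boost the frequencies the system attenuates, using a
# regularised inverse so near-zero responses don't blow up (epsilon = 1e-3).
predistorted_fft = np.fft.fft(target) * system_fft / (system_fft ** 2 + 1e-3)
predistorted = np.real(np.fft.ifft(predistorted_fft))
corrected = apply_system(predistorted, system_fft)

# The pre-distorted input comes out of the system closer to the target.
print("naive error:    ", np.mean((naive - target) ** 2))
print("corrected error:", np.mean((corrected - target) ** 2))
```

The pre-distorted input deliberately looks "wrong" (it overshoots at the edges), but after passing through the band-limited system it lands closer to the intended pattern -- the same trick, in spirit, that sub-wavelength mask correction plays on the optics.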

We are also looking at Extreme Ultraviolet technology, which will take us down to 13nm. This is a tough technology; some call it doing lithography with smoke and mirrors. Mirror optics are required because light at this wavelength cannot be focused by conventional lenses. This has been in research and development for six or seven years and probably will not see production until the end of the decade.

We also have our Advanced Mask Technology Centre in Dresden, with Infineon and DuPont Photomasks. Construction started in the middle of last year, and it will open in the next few weeks. It is intended to support photomask development in the immediate future, but also research and development for the long haul.
