In the same way that a pillow filled with pebbles is harder to get smooth than one stuffed with sand, chip makers are finding it harder to make transistors behave predictably as they shrink. Here, the stuffing isn't actually lumpier; it's that the pillowcase has become so small that the grains of sand look like pebbles.
And uneven transistors are disastrous. A circuit can only go as fast as its slowest component, and when you're dealing with hundreds of millions of transistors the bottom end of the variability curve is going to have a substantial population. There are tons of things that can vary, too - speed, temperature, voltage, current and time - and at the level of engineering within a contemporary high-performance circuit, there's very little room for imprecision.
Thus, variability leads to low yield - you end up making chips that don't work and can't be sold.
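The slowest-of-many effect is easy to see with a toy simulation (illustrative only - the numbers and the normal distribution are assumptions, not a foundry model): draw per-transistor speeds from the same bell curve and take the minimum, and the worst device in a big population sits much further below the mean than the worst of a small one.

```python
import random

def critical_path_speed(n_transistors, mean=1.0, sigma=0.05, seed=0):
    """Toy model: a path can only go as fast as its slowest transistor.
    Draws per-transistor speeds from a normal distribution and returns
    the minimum - the more transistors, the worse the worst one gets."""
    rng = random.Random(seed)
    return min(rng.gauss(mean, sigma) for _ in range(n_transistors))

# Each device is drawn from the same curve, but the slowest member of a
# large population falls far below the slowest member of a small one.
print(f"worst of 100:       {critical_path_speed(100):.3f}")
print(f"worst of 1,000,000: {critical_path_speed(1_000_000):.3f}")
```

With millions of devices, the tail of the distribution is guaranteed a substantial population, which is exactly why variability eats into yield.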
Which is why a recent announcement (pdf) from the University of Southampton is so interesting. Drs Peter Wilson and Reuben Wilcock of that ilk have come up with the CAT - Configurable Analogue Transistor - which is a complex beastie hiding a simple idea. It's a bit like an aircraft wing with extensible flaps - at take-off and landing, when you need more lift at lower speed and don't mind (or actually want) drag, you stick the flaps out. When you're actually flying and want low drag at high speeds, you tuck the flaps in and off you go.
At heart, the CAT is a set of exponentially smaller transistor parts that can be switched in various combinations in parallel across the main transistor. Once you've built your circuit, you test how it works and, if you need to, configure the right combination of extra bits to add to the problematical device to tune the performance so it works in the design. A bonus is that it's then possible to adjust for performance change over the lifetime of the device.
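The selection logic resembles a binary-weighted trim. Here's a minimal sketch of that idea (the function name, segment sizes and the greedy scheme are my illustration, not the paper's method): each correction segment is half the size of the last, and you switch in the combination that best covers a measured shortfall in the main device.

```python
def configure_cat(deficit, main_width=1.0, n_segments=4):
    """Hypothetical sketch: a main transistor of width main_width has
    n_segments correction segments, each half the size of the last
    (W/2, W/4, ...), switchable in parallel across it.  Greedily
    enable segments, largest first, to cover a measured width
    deficit; with these binary weights, greedy selection is optimal."""
    segments = [main_width / 2 ** (i + 1) for i in range(n_segments)]
    enabled = []
    remaining = deficit
    for width in segments:              # largest segment first
        if width <= remaining + 1e-12:  # fits in what's left to correct
            enabled.append(width)
            remaining -= width
    return enabled, remaining           # switched-in widths, residual error

enabled, residual = configure_cat(deficit=0.3)
print(enabled)   # which segment widths get switched in
print(residual)  # leftover mismatch after tuning
```

The residual shows the resolution limit: more (and smaller) segments buy a finer correction, at the cost of extra switches and configuration bits.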
The researchers point out that you don't need to do this to every transistor in the design - part of the trick is identifying which ones are most at risk of affecting yield and concentrating on those, and it turns out there aren't that many. There are lots of other sensible caveats too, about layout and context - and of course, this is an analogue technique perhaps best suited to tuning transistors that have to operate in the linear part of their performance curves; transistors in digital circuits spend their lives hard on or off.
This isn't the first technique for adjusting the performance of parts after they've been manufactured - laser trimming, which involves zapping parts of a component with a death ray, has been around for a long time and still sounds more science fiction than a CAT. But this does illustrate a trend that I feel will become more and more important: self-adjusting circuits that don't assume their components are stable or reliable, but actively reconfigure them to operate in their optimal mode.
To return to the aviation analogy - it's like fly-by-wire fighters, where the machine itself looks after the donkey work better than any human can, making it possible to take fundamentally unstable designs and use that instability for phenomenal performance. You can even see a variant of that idea in Google's architecture, where it uses multitudes of cheap low-reliability hard disks and servers and expects the software around them to manage the results.
Expect these ideas to become more and more important as we get closer and closer to areas of physics and engineering where the statistics go against us. Exactly what the implications of this will be - well, we'll have to find out as we go along.