Recently, Microsoft's problem with the Xbox's infamous Red Ring Of Death resulted in a billion-dollar bill. The consoles simply died after a while, an issue that seemed to be linked to heat, but the company was reluctant to disclose exactly what was failing.
Now we know — the graphics chip, designed in-house, chronically overheated and eventually gave up the ghost.
It can seem hard to believe that a company with so many resources can make such an expensive mistake. Yet in electronics design, there is no shortage of hidden problems that can elude every reasonable effort to find them before launch. Chip design is not the exact science you might imagine.
I've been there myself. Here's how it can go wrong. In the late 1980s, I worked for a small company with big ambitions. We started off by building a cheap PC network — non-standard, but built around a few low-cost off-the-shelf chips used in an ingenious way. That sold well enough that it was decided to make a higher-performance version around a custom chip design. Our hardware designer (and co-owner) was a very experienced, creative and effective engineer, one of the most capable people I know: the project seemed very doable.
The prototyping went well. At the time, chips were designed in four main stages. First, you design the actual circuit in a CAD package, which outputs a netlist, effectively a script that describes which logic gates to use and how they're connected. Then, you run the netlist through a software simulator that applies electrical rules as if the circuit were running: you feed it a file of fake signals and check the output against what you expect.
Because simulators are always very slow compared to hardware, you can only check a small subset of possible conditions before building a hardware prototype. This can be a collection of many, perhaps hundreds, of standard logic chips wired together by hand to mimic your design's internals: it's slow to build, hard to get exactly right, and difficult to make in multiples, let alone plug into a PC.
Or you can take the fast and expensive path and go for an e-beam lithography prototype: this is a way of building a full custom chip by firing a carefully steered beam of electrons at a properly prepared bit of silicon. You feed your netlist into the e-beam process at one end and get back (you hope) a real working prototype, the same size and speed as the final part.
These are far too expensive for production — e-beam is the equivalent of hand-lettering an illuminated manuscript, as opposed to the printing press of standard chip fabrication — but a great way of creating final test systems that work exactly as the finished design.
Our e-beam litho prototypes came back from the fab, we plugged them in, held our breath, turned on the PCs and loaded the software. There's absolutely nothing like that moment; months of work and an entire future hang on it.
It worked just fine. All we had to do then was send the netlist to a company that made proper Asics (Application Specific Integrated Circuits). These are made in large numbers very cheaply; they cost a lot more to set up than e-beam litho, but when that's done you can churn them out like so many sausages. We knew the circuit worked; the Asic was just another way to build something we'd now tested in many different ways.
And at first, all went to plan. The chips were made, the network boards produced, software finished (well, I say finished...), the product launched and we started to take the punters' money.
Then reports started to come in from the field that there was an uncommon but far too frequent failure mode where PCs locked up solid in mid-network transaction. We were still a small company with very limited resources: it doesn't matter how smart you are, once things start going wrong you can only do so much firefighting. But time is tight: it's at this point that you learn by heart the number of every local late-night fast food delivery service.
At first, we couldn't even replicate the problem; everything ran fine in the lab. It transpired after a while that certain kinds of PC were more vulnerable than others: we collected examples. The next problem was finding a way to make the error happen repeatedly and often enough for us to investigate it. That took a while: our collection of Sancho's pizza boxes grew to mountainous proportions before we had a sequence of network transactions that could crash the bleeder on command. There didn't seem to be anything special about that sequence, but at least we could hook up our rather meagre collection of test equipment and start gathering real data.
It's worth remembering what the state of PC hardware was in the late 1980s, when the 8086 and 80286 ran the show and the 80386 was just coming onto the market. There were hundreds of different brands, many of them with custom motherboards, each trying with more or less success to emulate the IBM PC standard. Compatibility was a big issue: most (but by no means all) clones worked well out of the box. What happened when you plugged in an expansion card was a different matter.
The original IBM PC design was remarkable for a largely forgotten fact: hardware and software, it was open source. PC-DOS wasn't: that was Microsoft's. But a listing of the Bios and all the circuit diagrams were available...
...in the IBM PC Technical Manual. You couldn't just go and replicate them bit-for-bit, of course — IBM jealously guarded its copyright. But you could make your own with a high degree of confidence that they worked as described in the book.
One of the key parts of the equation was the expansion bus, the signals that fed interface cards such as the graphics adaptors, disk interfaces and network devices such as our own. That became known as the ISA — Industry Standard Architecture — bus, later joined by its extended variant, the EISA bus. On the surface, this looked like a perfectly normal chunk of engineering: all the signals, their timings and voltage levels, were described with lots of nice clean graphs in the Technical Manual.
Unfortunately, that was the only place you'd see lots of nice clean graphs. Reality is far messier. Signals were late or early, voltages were never quite what you'd expect, and everything could change depending on what else was plugged into the bus alongside your bit. And, of course, all those hundreds of different makes of PC had different variations. If you designed something to the book, chances were it wouldn't work very well. Experienced designers know this, and are very conservative in what they expect. Our designer was certainly experienced, and had done a good job of the ISA interface part of the chip. Our e-beam litho prototypes worked perfectly well. It had to be something to do with the Asic.
In the end, after months of extreme pain, cost and pizza overdose, the problem was revealed in a chance conversation at a conference in a "Oh, we had that problem..." way. One of the abiding sins of the ISA bus — indeed, any bus that used the rather simple-minded signalling circuitry of the time — is called undershoot. If you rapidly change a signal at one end of the bus from five volts to zero volts, you would expect it to stop at zero all the way along the bus — it's just a bit of wire. But the basic physics of transmission lines, reflections from unterminated ends in particular, means that further down the bus, some of the energy in that transition drives the voltage past zero, into negative numbers.
This is very bad news. The transistors in the logic circuits can lock up or even get permanently damaged by even a slight negative input. From time immemorial, chip designers have guarded against this with clamp diodes, fast-acting switches on the input lines that turn on as soon as a negative voltage appears and effectively short-circuit it to zero. Imagine a horde of Vikings rampaging towards a richly appointed town: the clamp diodes are trapdoors that open under the feet of the naughty Nordics and funnel them off to a big underground pit.
Normally, this just works. A negative spike appears, the clamp diodes turn on and the energy flows through them to ground. Nobody sees a thing. But on the Asics we were using, 'ground' wasn't quite as good as it should have been. If a really large undershoot happened on lots of input lines simultaneously (something that happened only when a certain data pattern appeared on the bus, and then only on particular designs), the diodes turned on fine, but the diverted energy couldn't drain away to the board's ground fast enough. The result was that the chip's internal ground, and with it the whole chip, went negative. The pits full of Vikings overfilled and burst up through the floors of the townsfolk.
It's an analogue quirk in a digital device, and not one that was specified in the design guide for the Asic — which, after all, was being driven way outside its nominal specification. We fixed it, if memory serves, by adding an extra bus interface chip between the Asic and the PC; this soaked up the undershoot without complaining, and we moved on to the next design.
How could we have avoided this? As a small company, we couldn't afford the time or the money to go out and buy every make of PC before launch and go through the saturation testing that would have revealed the problem — but that's an issue that still plagues even the biggest outfits. We got our design right. The only thing that might have saved us was being far better plugged into the experiences of other companies working on the same problems; in those pre-internet days, that was by no means a simple job for a 10-person outfit in a converted warehouse in the East End of London.
As it was, we learned a great deal the hard way. It happens. It didn't cost us a billion dollars or earn us scalding headlines; we got off lighter than Microsoft's Red Ring Of Death. The complexity of modern IT, especially when you factor in millions of users and all their variations, is such that you can't know everything in advance. The world is not as it appears in technical manuals, marketing slides or engineers' heads — and all you can do is learn as much about it as you can before making the leap of adding to it.