Recently, Microsoft's problem with the Xbox's infamous Red Ring Of Death resulted in a billion-dollar bill. The consoles just died after a while, an issue that seemed to be linked to heat, though the company was reluctant to disclose exactly what was failing.
Now we know — the graphics chip, designed in-house, chronically overheated and eventually gave up the ghost.
It can seem hard to believe that a company with so many resources can make such an expensive mistake. Yet in electronics design, there is no shortage of hidden problems that can elude every reasonable effort to find them before launch. Chip design is not the exact science you might imagine.
I've been there myself. Here's how it can go wrong. In the late 1980s, I worked for a small company with big ambitions. We started off by building a cheap PC network — non-standard, but built around a few low-cost off-the-shelf chips used in an ingenious way. That sold well enough that it was decided to make a higher-performance version around a custom chip design. Our hardware designer (and co-owner) was a very experienced, creative and effective engineer, one of the most capable people I know: the project seemed very doable.
The prototyping went well. At the time, chips were designed in four main stages. First, you design the actual circuit in a CAD package, which outputs a netlist — effectively a script that describes which logic gates to use and how they're connected. Then, you run the netlist through a software simulator that applies electrical rules as if the circuit were running: you feed it a file of fake signals and check the output against what you expect.
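To make that netlist-plus-testbench idea concrete, here is a toy sketch in Python. The tuple format, the half-adder circuit and the signal names are all my own invention for illustration — real netlist formats (EDIF, structural Verilog) and real simulators are vastly richer — but the shape of the job is the same: a description of gates and wires, a file of fake input signals, and a check of the outputs against what you expect.

```python
# Toy netlist simulator. A "netlist" here is a list of
# (gate_type, output_signal, input_signals) tuples -- a stand-in
# for the far richer formats real CAD tools emit.

GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

# A one-bit half adder: sum = a XOR b, carry = a AND b.
netlist = [
    ("XOR", "sum",   ("a", "b")),
    ("AND", "carry", ("a", "b")),
]

def simulate(netlist, inputs):
    """Propagate signal values through the gates, in listed order."""
    signals = dict(inputs)
    for gate, out, ins in netlist:
        signals[out] = GATES[gate](*(signals[n] for n in ins))
    return signals

# The "file of fake signals": test vectors paired with expected outputs.
vectors = [
    ({"a": 0, "b": 0}, {"sum": 0, "carry": 0}),
    ({"a": 0, "b": 1}, {"sum": 1, "carry": 0}),
    ({"a": 1, "b": 0}, {"sum": 1, "carry": 0}),
    ({"a": 1, "b": 1}, {"sum": 0, "carry": 1}),
]

for stimulus, expected in vectors:
    result = simulate(netlist, stimulus)
    for name, value in expected.items():
        assert result[name] == value, (stimulus, name, result[name])
print("all vectors pass")
```

Even this trivial version shows why simulation can never be exhaustive: a design with n input bits and internal state has far more possible stimulus sequences than any test file can cover, which is exactly why hardware prototypes came next.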
Because simulators are always very slow compared to hardware, you can only check a small subset of possible conditions before building a hardware prototype. This can be a collection of many, perhaps hundreds, of standard logic chips wired together by hand to mimic your design's internals: it's slow to build, hard to get exactly right, and difficult to make in multiple copies, let alone plug into a PC.
Or you can take the fast and expensive path and go for an e-beam lithography prototype: this is a way of building a full custom chip by firing a carefully steered beam of electrons at a properly prepared bit of silicon. You feed your netlist into the e-beam process at one end and end up with a (you hope) fully functioning prototype, the same size and speed as the final part.
These are far too expensive for production — e-beam is the equivalent of hand-lettering an illuminated manuscript, as opposed to the printing press of standard chip fabrication — but a great way of creating final test systems that work exactly as the finished design.
Our e-beam litho prototypes came back from the fab, we plugged them in, held our breath, turned on the PCs and loaded the software. There's absolutely nothing like that moment; months of work are behind you and an entire future hangs on it.
It worked just fine. All we had to do then was send the netlist to a company that made proper Asics (Application Specific Integrated Circuits). These are made in large numbers very cheaply; they cost a lot more to set up than e-beam litho, but when that's done you can churn them out like so many sausages. We knew the circuit worked; the Asic was just another way to build something we'd now tested in many different ways.
And at first, all went to plan. The chips were made, the network boards produced, software finished (well, I say finished...), the product launched and we started to take the punters' money.
Then reports started to come in from the field that there was an uncommon but far too frequent failure mode where PCs locked up solid in mid-network transaction. We were still a small company with very limited resources: it doesn't matter how smart you are, once things start going wrong you can only do so much firefighting. But time is tight: it's at this point that you learn by heart the number of every local late-night fast food delivery service.
At first, we couldn't even replicate the problem; everything ran fine in the lab. It transpired after a while that certain kinds of PC were more vulnerable than others: we collected examples. The next problem was finding a way of making the error happen repeatedly and often enough for us to investigate it. That took a while: our collection of Sancho's pizza boxes grew to mountainous proportions before we had a sequence of network transactions that could crash the bleeder on command. There didn't seem to be anything special about that sequence, but at least we could hook up our rather meagre collection of test equipment and start gathering real data.
It's worth remembering what the state of PC hardware was in the late 1980s, when the 8086 and 80286 ran the show and the 80386 was just coming onto the market. There were hundreds of different brands, many of them with custom motherboards, each trying with more or less success to emulate the IBM PC standard. Compatibility was a big issue: most (but by no means all) clones worked well out of the box. What happened when you plugged in an expansion card was a different matter.
The original IBM PC design was remarkable for a largely forgotten fact: hardware and software, it was open source. PC-DOS wasn't: that was Microsoft's. But a listing of the Bios and all the circuit diagrams were available...