Intel got on the phone on Monday to tell us chip-centric hacks about a bunch of papers the company was presenting at the 2008 Symposium on VLSI Technology in Hawaii. I've been trying to turn the notes from that briefing into a full news story, but on mature consideration it's more suitable to the informality of a blog post - when a company braindumps unconnected bits of technology without a real common theme, it doesn't fit well into the traditional channels.
First and most interesting is a new memory architecture - floating body cells, an evolution of dynamic RAM. FBC integrates a number of components from existing DRAM designs into a single specially designed transistor. "It uses a special very thin silicon-on-insulator technology which allows us to use a lower voltage than today," Intel said. "It's more than a factor of 30 smaller than current cache memory at 45nm, and ten times better than anything anyone else has published. It's conceivable for 15nm and beyond." On Intel's usual two-year node cadence, that puts it five or six years away. It gets some of its size advantage because it's basically a single-transistor memory, compared to the six-transistor static cell used in processor cache, but there'll be additional complexity because dynamic memory is more difficult to control than static.
There were more disclosures about Nehalem. In particular, the first version of this new architecture will have four cores, all with independent control of their voltage and clocks - and the ability to change those values very quickly and to a high degree of precision. In fact, says Intel, the cores can reconfigure every clock cycle - and since they're equipped with very good ways of checking their own performance, they can operate in the most efficient way possible without having to have lots of slack for safety. From an engineering point of view, this is a big step towards an ideal that's been mooted for ages - circuits that have a lot of knowledge about how they're working and a lot of intelligence about how to adjust that on the fly.
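To make the idea concrete, here's a toy sketch of per-core dynamic voltage and frequency scaling. Everything in it - the operating points, the 10% headroom rule, the function names - is an illustrative assumption of mine, not Intel's actual Nehalem mechanism; the real thing works in hardware, per cycle, from on-die sensors.

```python
# Hypothetical (voltage V, frequency GHz) pairs, lowest-power first.
# These numbers are made up for illustration.
OPERATING_POINTS = [
    (0.85, 1.6),
    (1.00, 2.4),
    (1.15, 3.2),
]

MAX_GHZ = OPERATING_POINTS[-1][1]

def pick_operating_point(utilization):
    """Choose the cheapest (voltage, frequency) pair that keeps a core
    under 90% projected utilization - a crude software stand-in for the
    self-monitoring Intel describes. `utilization` is the core's demand
    as a fraction of its capacity at full speed."""
    for volts, ghz in OPERATING_POINTS:
        # At a lower clock, the same work occupies a larger fraction of
        # each second, so scale the demand up accordingly.
        projected = utilization * (MAX_GHZ / ghz)
        if projected <= 0.9:  # keep 10% headroom rather than large slack
            return volts, ghz
    return OPERATING_POINTS[-1]  # still saturated: run flat out

# Each core decides independently from its own measured load:
core_loads = [0.2, 0.5, 0.95, 0.1]
settings = [pick_operating_point(u) for u in core_loads]
```

A lightly loaded core drops to the low-voltage point while a saturated neighbour runs flat out - the "independent control" part of the disclosure - though the real silicon re-evaluates this every cycle, not on a scheduler tick.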
Another journalist (missed his name - apologies) brought up the very good question of whether this means that traditional speed ratings and performance metrics for chips are no longer appropriate, and whether Intel would be abandoning them. Intel's answer was intriguing: "We debated this quite a bit. When we talked to partners, people did not want that. A tremendous amount of innovation has gone into avoiding that. Internally, the chip is adapting, but it is deterministic from the outside."
The same theme - of chips deciding for themselves how best to operate in situ - came up over signalling and bus speeds. It's hard, expensive and inefficient to set up external tests during production to check the limits of how fast a bus can operate. Thus, Intel is building more and more tests into the chips themselves - transistors are cheap but tester time isn't - to measure key parameters such as clock jitter which directly limit how fast a bus can reliably run.
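As a rough sketch of what such a built-in measurement does with its raw data: take timestamps of clock edges, compute the jitter, and back off the bus rate by a guard band. The edge values, the six-sigma guard band and the function names below are my own illustrative assumptions, not Intel's circuitry - on-die, this happens in analogue and digital hardware, not Python.

```python
import statistics

def period_jitter(edge_times_ns):
    """Standard deviation of the measured clock periods, in ns -
    a simple definition of period jitter."""
    periods = [b - a for a, b in zip(edge_times_ns, edge_times_ns[1:])]
    return statistics.stdev(periods)

def max_reliable_rate(edge_times_ns, sigmas=6.0):
    """Highest bus rate (GHz) that still meets timing if a period can
    stretch by `sigmas` standard deviations of jitter. The guard-band
    rule here is an assumption for illustration."""
    periods = [b - a for a, b in zip(edge_times_ns, edge_times_ns[1:])]
    worst_period = statistics.mean(periods) + sigmas * statistics.stdev(periods)
    return 1.0 / worst_period  # period in ns -> rate in GHz

# Hypothetical edge timestamps from a nominal 1 ns (1 GHz) clock:
edges = [0.0, 1.001, 1.999, 3.002, 4.000, 5.001]
```

The point of doing this on-die is that the chip can characterise its own worst-case timing per part and per operating condition, instead of every part carrying the pessimistic margin of the slowest one off the line.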
All rather intriguing. For a long time, there's been a tacit assumption that Moore's Law is about making things smaller, therefore faster: the truth is that it's about making things smaller, full stop. When faster stops happening, smaller has to mean smarter - and there's only so much that companies like Intel can do without the software side of the industry getting smarter too. No more free rides.
More on that particular chain of thought later.