With the announcement of its experimental 48-core Single-chip Cloud Computer, Intel has gone all Google.
It's not that the chip is designed along the same lines as Google builds its datacentres — although, intriguingly enough, it is — nor that Intel intends to become a near-monopoly supplier with the new idea. Intel's already there, and it has no intention of throwing away its current lead in the service of a new and untested idea.
The key advance that makes the Single-chip Cloud Computer (SCC) the hardware equivalent of a Google cloud is that Intel is effectively giving it away to its target audience — in this case, computer researchers. A chip of this complexity, made in such small numbers, would cost hundreds of thousands of dollars, if development costs were to be covered. Even if Intel is looking for a token payment from researchers (the company hasn't said yet), the money won't come close to payback. Like Google Wave, though, the value of SCC to the company comes in finding out how well it works in preparation for what comes next.
And like Google Wave, the SCC's value lies in creating a new way of doing things out of some very established ideas. Looked at from one angle, there is almost nothing new in there: the processing cores are very similar to early Pentiums, and the way they communicate is through message passing — an architectural concept almost as old as computing itself.
This is a huge advantage. As Intel knows only too well, there is absolutely no point in creating an enormously clever new design if its very cleverness prevents people from understanding it. Time and again, the company has produced radically different architectures in an attempt to move computing away from the bread-and-butter mainstream. Time and again, they've been ignored. Few today remember the i432 and the i960, and it is unlikely that the Itanium will ever be more than a footnote in computing history.
Sensibly, Intel has concentrated its efforts on making established ideas run fast and efficiently. The SCC has extraordinarily adaptable power management, which looks most out of place in a design intended primarily for architectural exploration: it's like building a prototype jet fighter and adding an HD in-flight entertainment system for the navigator. But it makes sense, because Intel already has this technology from its Nehalem work and knows it must be part of any future products. It also means the chip can go into a standard motherboard format and run on standard power supplies in standard cases. That makes the practicalities of giving it away, and of persuading researchers to try it, far more plausible.
Likewise, the message-passing core design must work exceptionally fast and with very low latency — attributes which 'just work' from a programmer's point of view, but take lots of silicon smarts. Incidentally, Intel has cheated a bit here by adding a tiny amount of shared memory that all the cores can access, intended for out-of-band message synchronisation. This may undercut one of the main advantages of message passing — that, unlike cache coherency, the architecture scales efficiently off-chip — but that's the point of an experimental design, and one of the more interesting results will be how well this sort of acceleration measure works in reality.
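For readers unfamiliar with the distinction, the two mechanisms can be sketched in a few lines of ordinary threaded code. This is purely illustrative — it is not Intel's API, and every name in it is invented for the sketch: each "core" owns a private inbox and communicates only by copying messages into other cores' inboxes, while a tiny piece of shared state stands in for the SCC's out-of-band synchronisation memory.

```python
# Toy model of message-passing cores with a small shared sync region.
# Hypothetical names throughout; nothing here reflects the real SCC API.
import threading
import queue

NUM_CORES = 4

# One private inbox per "core": messages are copied between cores,
# never read out of another core's memory (message passing).
inboxes = [queue.Queue() for _ in range(NUM_CORES)]

# A tiny shared region, loosely analogous to the SCC's shared memory,
# used only for out-of-band "I'm done" signalling, not for data.
done_flags = [threading.Event() for _ in range(NUM_CORES)]

results = {}

def core(core_id):
    if core_id == 0:
        # Core 0 sends each other core a message rather than writing
        # to memory those cores can see.
        for dest in range(1, NUM_CORES):
            inboxes[dest].put(("hello", core_id))
        results[core_id] = "sent"
    else:
        msg, sender = inboxes[core_id].get()  # blocks until a message arrives
        results[core_id] = f"{msg} from core {sender}"
    done_flags[core_id].set()  # synchronise via the shared region

threads = [threading.Thread(target=core, args=(i,)) for i in range(NUM_CORES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results[1])  # -> hello from core 0
```

The point of the sketch is the division of labour: bulk data moves only by explicit messages, which is what scales off-chip, while the shared region is kept small and used solely for coordination — the compromise the article describes.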
Doubtless, there are plenty of other interesting things to learn about the SCC. Intel is keeping many details in reserve for the technical paper it will publish in February, although at the launch on Wednesday, the designers were talking tantalisingly about the extreme configurability of the cores.
The main question, though, is whether Google's approach to software — make it, give it away, capitalise on what evolution makes of the result — will work in hardware.
Tellingly, if Intel does find the SCC experiment worthwhile, there's a lot more it can do to develop the concept. The Braunschweig labs that did the lion's share of the development work on the SCC specialise in emulation — in creating hardware that can easily take on myriad different configurations and pretend to be something else. Intel has already learned the value of that internally, where emulation has become a key design tool in coping with the complexities of creating, testing and verifying massive designs.
It makes sense to consider pushing that power out to the next stage, to the edge of the market where new ideas meet practicality. Could Intel be thinking of creating a research platform that will accept many new architectures and let real people work with them, and ultimately decide which ones work? It's an enthralling idea, and one which opens up many strange new futures, where hardware takes on many of the attributes of software in a step beyond virtualisation.
But it demonstrates something that Google has always said, even if we haven't been listening: Google's big idea isn't about the web, or search, or mail, or services. It's about knowledge and availability, and finding new ways to give people what they want without getting bogged down in ways that just happened to work in the past. Intel is showing how that big idea could work a long way from web search.