Intel goes Google with cloud chip

Summary: The key innovation in Intel's experimental 48-core chip isn't in the silicon at all. It's what the company is doing with it.

TOPICS: Emerging Tech

With the announcement of its experimental 48-core Single-chip Cloud Computer, Intel has gone all Google.

It's not that the chip is designed along the same lines as Google builds its datacentres — although, intriguingly enough, it is — nor that Intel intends to become a near-monopoly supplier with the new idea. Intel's already there, and it has no intention of throwing away its current lead in the service of a new and untested idea.

The key advance that makes the Single-chip Cloud Computer (SCC) the hardware equivalent of a Google cloud is that Intel is effectively giving it away to its target audience — in this case, computer researchers. A chip of this complexity, made in such small numbers, would cost hundreds of thousands of dollars, if development costs were to be covered. Even if Intel is looking for a token payment from researchers (the company hasn't said yet), the money won't come close to payback. Like Google Wave, though, the value of SCC to the company comes in finding out how well it works in preparation for what comes next.

And like Google Wave, the SCC's value lies in creating a new way of doing things out of some very established ideas. Looked at from one angle, there is almost nothing new in there: the processing cores are very similar to early Pentiums, and the way they communicate is through message passing — an architectural concept almost as old as computing itself.

This is a huge advantage. As Intel knows only too well, there is absolutely no point in creating an enormously clever new design if its very cleverness prevents people from understanding it. Time and again, the company has produced radically different architectures in an attempt to move computing away from the bread-and-butter mainstream. Time and again, they've been ignored. Few today remember the i432 and the i960, and it is unlikely that the Itanium will ever be more than a footnote in computing history.

Sensibly, Intel has concentrated its efforts on making established ideas run fast and efficiently. The SCC has extraordinarily adaptable power management, which looks most out of place in a design intended primarily for architectural exploration: it's like building a prototype jet fighter and adding an HD in-flight entertainment system for the navigator. But it does make sense, because Intel already has this technology from its Nehalem work and knows it must be a part of any future products. It also means the chip can go into a standard motherboard format and run on standard power supplies in standard cases. That makes the practicalities of giving it away and persuading researchers to try it far more plausible.

Likewise, the message-passing core design must work exceptionally fast and with very low latency — attributes which 'just work' from a programmer's point of view, but take lots of silicon smarts. Incidentally, Intel has cheated a bit here by adding a tiny amount of shared memory that all the cores can access and that is intended for out-of-band message synchronisation. This may go against one of the main advantages of message passing, which is that the architecture scales efficiently off-chip in a way that cache coherency does not. But that's the point of an experimental design, and one of the more interesting results will be how well this sort of acceleration measure works in reality.
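The hybrid the article describes — cores that talk only through messages, plus a scrap of shared state for out-of-band synchronisation — can be sketched in miniature. This is a toy model in Python threads, not the SCC's actual programming interface; all the names here are invented for illustration.

```python
import threading
import queue

# Toy model of two "cores" that communicate only by message passing,
# plus one tiny piece of shared state used for out-of-band signalling.
# Loosely mirrors the SCC idea of per-core message buffers backed by a
# small shared memory; none of this is Intel's real API.

inbox = {0: queue.Queue(), 1: queue.Queue()}  # per-core message buffers
ready = threading.Event()  # stand-in for the shared synchronisation word

def core0():
    inbox[1].put("partial result: 42")  # send a message to core 1
    ready.set()                         # out-of-band "message waiting" signal

def core1(results):
    ready.wait()                        # cheap check on shared state...
    results.append(inbox[1].get())      # ...then receive the actual message

results = []
t0 = threading.Thread(target=core0)
t1 = threading.Thread(target=core1, args=(results,))
t1.start()
t0.start()
t0.join()
t1.join()
print(results[0])  # -> partial result: 42
```

The design choice the sketch illustrates: the shared flag lets a receiver discover that a message is pending without the sender and receiver sharing the message data itself, which is the kind of shortcut the article suggests may cut against message passing's clean off-chip scaling.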

Doubtless, there are plenty of other interesting things to learn about the SCC. Intel is keeping many details in reserve for the technical paper it will publish in February, although at the launch on Wednesday, the designers were talking tantalisingly about the extreme configurability of the cores.

The main question, though, is whether Google's approach to software — make it, give it away, capitalise on what evolution makes of the result — will work in hardware.

Tellingly, if Intel does find the SCC experiment worthwhile, there's a lot more it can do to develop the concept. The Braunschweig labs that did the lion's share of the development work on the SCC specialise in emulation, in creating hardware that can easily take on myriad different configurations and pretend to be something else. Intel has already learned the value of that internally, where emulation has become a key design tool in coping with the complexities of creating, testing and verifying massive designs.

It makes sense to consider pushing that power out to the next stage, to the edge of the market where new ideas meet practicality. Could Intel be thinking of creating a research platform that will accept many new architectures and let real people work with them and ultimately decide which ones work? It's an enthralling idea, and one which opens up many strange new futures, where hardware takes on many of the attributes of software in a step beyond virtualisation.

But it demonstrates something that Google has always said, even if we haven't been listening: Google's big idea isn't about the web, or search, or mail, or services. It's about knowledge and availability, and finding new ways to give people what they want without getting bogged down in ways that just happened to work in the past. Intel is showing how that big idea could work a long way from web search.


Rupert Goodwins

About Rupert Goodwins

Rupert started off as a nerdy lad expecting to be an electronics engineer, but having tried it for a while discovered that journalism was more fun. He ended up on PC Magazine in the early '90s, before that evolved into ZDNet UK - and Rupert evolved with them into an online journalist.



  • But...

    People don't want a specific chip for a specific job; they want a CPU that gives them flexibility and choice when they choose to partake in a multitude of different activities.
  • What Intel is doing is like Google's Methodology

    Google has monstrous server farms built out of the same hardware repeated thousands of times. The only difference is what software queries are running on most of them, or whether the machine is running in management mode or query mode.

    What a lot of people forget is that one of the components in Intel Pentiums (and presumably later models such as the P4) is a microcode cache. The microcode can be programmed on the fly and could be used for customised-as-needed operations. So although the hardware is generic, special functions could be inserted into the CPU when needed.

    Each of the cores would be built in exactly the same way, but the microcode caches could be programmed as needed when the running task requires it.

    It's a powerful idea. It means that some of the cores could be doing graphics rendering for the system while another few cores do the ballistics calculations necessary to track a virtual weapon in a game, for instance, or a real weapon in a defensive system. Meanwhile, a couple of cores handle communications and I/O for the entire system.

    Turn the power off and back on again, download another program into it, and it becomes a diesel locomotive controller or a hybrid automobile system. That is the power of the Intel idea.
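The commenter's picture — identical cores whose behaviour is decided purely by what gets loaded into them — can be sketched as a software analogy. This is a hypothetical Python illustration of the idea, not anything from Intel's design; the role names are invented.

```python
# Identical "cores" whose behaviour depends only on the program loaded
# into them at startup -- a software analogy for reprogramming a
# microcode cache per task. All role names here are invented.

def graphics(data):
    return f"rendered {data}"

def ballistics(data):
    return f"tracked {data}"

def comms(data):
    return f"shipped {data}"

def make_core(program):
    # every core is built exactly the same way;
    # only the program loaded into it differs
    def core(data):
        return program(data)
    return core

# "power off, load another program, power on" == build a new core set
cores = [make_core(p) for p in (graphics, ballistics, graphics, comms)]
print(cores[1]("incoming shell"))  # -> tracked incoming shell
```

Reloading the same generic hardware with a different table of programs is all it takes to repurpose the whole system, which is the point the comment is making.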