Intel and the giant chip

You can see Intel's giant chip as a cry for help. We need software that can make parallel computing more efficient while supporting present operating systems.
Written by Dana Blankenhorn

My house keys still hang on a tchotchke Intel gave me almost 20 years ago.

It's a two-sided plastic rectangle. One side holds the "old" Intel 386 chip, the other the "new" 486 chip.

These were easy to create, because you always find bad chips in any large batch. Turning them into keychains is better than tossing them in the trash.

What was always most interesting to me was that the 486 chip was several times larger than its predecessor. I learned that this is one way companies like Intel can show progress when they're fighting to keep pace with Moore's Law by squeezing circuit lines closer together. Make a bigger chip.

Yesterday Intel announced a fairly ginormous chip, which it calls the SCC. That stands for "Single-chip Cloud Computer," and it's midway between a deliverable and a DeLorean.

They're talking about shipping hundreds of these chips next year, mostly to researchers. The version they showed installed in a PC had a lot of fans, meaning this chip runs too hot for general use.

But it's an important marker of where computing is headed, and what is needed to get us there.

The SCC consists of 24 "dual-core" Atom chips, 48 cores in all. The Atom is the same chip that goes into a little Netbook. The SCC pushes the multi-core concept way out there, and Intel wants to use it to get software writers thinking about how to support the concept.

Because it's based on the Atom, the SCC runs Windows or Linux. But because it's actually 48 cores on a single piece of silicon, Intel needs software that can turn more of that combined power into useful output than anything presently available.

Chips being shipped into the general market next year have six or eight cores, not 48. The SCC is pushing present chip architectures to their limit. For 2011, however, chip lines will slim down from 45 to 32 nanometers, making this type of design more practical.

The whole "multi-core" idea is the on-chip expression of parallel computing, a 20-year-old concept that has given us distributed computing and the SETI@home project.

When computing is distributed using hundreds of thousands of home PCs, the CPU "cost" of carving up a big job into small bits, then reassembling the bits, is made up on volume.

To make this work on a single chip takes better software.
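What does "carving up a big job" actually look like in code? Here's a minimal fork-join sketch, my own illustration rather than anything from Intel's software: split a big sum into one chunk per core, add the chunks in parallel, then reassemble the partial results.

```go
// A toy fork-join example: carve a big job (summing a slice)
// into per-core chunks, run them in parallel, then reassemble.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func parallelSum(data []int) int {
	workers := runtime.NumCPU() // one chunk per available core
	partial := make([]int, workers)
	var wg sync.WaitGroup

	chunk := (len(data) + workers - 1) / workers // ceiling division
	for w := 0; w < workers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if lo >= len(data) {
			break
		}
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) { // fork: each worker sums its own slice
			defer wg.Done()
			for _, v := range data[lo:hi] {
				partial[w] += v
			}
		}(w, lo, hi)
	}
	wg.Wait() // join: wait for every worker to finish

	total := 0 // reassemble the partial results
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	data := make([]int, 1000000)
	for i := range data {
		data[i] = 1
	}
	fmt.Println(parallelSum(data)) // prints 1000000
}
```

Even in this toy, notice the overhead: splitting the work, scheduling the workers, and merging the results all burn cycles a plain loop wouldn't. That's the cost better software has to beat.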

So in some ways you can see Intel's giant chip as a cry for help. We need software that can make parallel computing more efficient while supporting present operating systems.

Intel is selling the sizzle of a "single chip cloud" but reality is far cloudier, and Intel needs imagination for this story to have a happy ending.

This post was originally published on Smartplanet.com
