
Taking chips to 10GHz ... and beyond

Want a PC that's 100 times more powerful than a 1,000MHz desktop? Then meet the army of microprocessor engineers hell-bent on breaking the 10GHz barrier.
Written by John G. Spooner, Contributor

Imagine if your home PC had as much giga-happy grunt as a mainframe. A desktop that's 100 times more powerful than a 1,000MHz PC, operates as your personal server, networks all your electronic appliances and responds to your voice commands.

Sound like Star Trek? Maybe, but to high-tech's leading microprocessor gurus it sounds more like 2011.

According to a cross-section of industry experts polled by ZDNet News US, 2011 is the year to mark on your PDA, because that's when chips are predicted to break the 10GHz barrier. That giga-count is the equivalent of 10,000MHz in megahertz-speak.

A horde of microprocessor design engineers are currently laboring in the labs of IBM, Intel, AMD, Motorola and elsewhere, striving to make 10GHz processors a reality. These chips won't just have high clock speeds. They will include other tricks of the trade, such as architectures that increase parallelism (the ability to process multiple instructions per clock cycle) and faster access to larger on-chip data stores, known as caches.

Fred Pollack, an Intel fellow and director of Intel's Microprocessor Research Lab, estimates that a 10GHz processor will be roughly 100 times more powerful than a 1GHz Pentium III or Athlon chip (both chips are due out in the second half of this year). Pollack reached that mind-boggling figure by multiplying the predicted tenfold increase in clock speed (from 1GHz to 10GHz) by a predicted tenfold increase in per-clock performance, achieved via new design techniques such as increased parallelism.
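As a rough sketch of that back-of-the-envelope arithmetic, overall throughput can be treated as the product of clock speed and work done per clock cycle; the tenfold figures below are the article's predictions, and the variable names are purely illustrative.

```python
# Back-of-the-envelope version of Pollack's estimate: overall performance
# scales roughly with clock speed multiplied by work done per clock cycle.
# The 10x factors are the predictions quoted above, not measurements.

clock_gain = 10.0       # predicted clock-speed increase: 1GHz -> 10GHz
per_cycle_gain = 10.0   # predicted gain from new designs (e.g. more parallelism)

overall_gain = clock_gain * per_cycle_gain
print(f"Estimated overall performance gain: {overall_gain:.0f}x")  # prints 100x
```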

And what will PC users do with all those extra gigahertz? The engineers working for the competing chip manufacturers may not agree on exactly how they're going to reach the 10GHz mountaintop, but they all agree that it will open up a panorama of new applications, including speech recognition-based computer interfaces and much better multimedia.

Pollack says greater clock speeds will make PCs easier to use -- giving them more humanlike interfaces, such as speech recognition capabilities, that take "a tremendous amount of PC performance" to deliver.

Another scenario Pollack presents is the 10GHz home PC. Tucked away in a closet, it works like a server, connecting to the Internet via a broadband connection, then distributing audio and video data, along with email, to any number of PC terminals and appliances throughout the house via a wireless connection.

IBM sees things as more network-centric. Through advances in its PowerPC technology, Big Blue envisages people carrying very powerful handheld devices that simultaneously act as a cellular phone, a Net-connected wireless device with e-mail and Web browsing, and a digital wallet.

"What (10GHz) means is better network performance ... (through) extremely high-speed servers that process a huge amount of data," says Bijan Davari, vice president of IBM's Semiconductor Research and Development Center in Fishkill, New York.

Piggybacking on that very fast network will be a new generation of handheld appliances, Davari predicts.

"One device will connect you wirelessly to the network," he says. "This device will combine a cellular phone and personal digital assistant ... with speech recognition. It knows who you are. It communicates your identity to the network."

And how are chip engineers going to achieve 10GHz? It's all a matter of building better "roads," according to Mark Bohr, Intel fellow and the company's director of process architecture and integration.

"A microprocessor is like a city, where you have places of business (transistors) connected by roads (interconnects), and the data are like automobiles," he explains. When it comes to designing higher performance chips, "roads have to be wide enough to accommodate (greater amounts of) traffic and enable higher speeds."

There are several design techniques that can be used to increase performance: shrinking the chip's dimensions, using different materials in manufacturing, developing circuit designs that can exploit those new materials, integrating new functions, and increasing overall levels of integration.

When it comes to designing new processors, though, the most straightforward way to increase performance and lower cost is to shrink the chip itself. That means the internals of the chip also have to be miniaturized to fit more transistors into a smaller space. In other words, shrinking the distance between transistors, a distance measured in microns (millionths of a meter), is like shortening a road: it makes for a much faster trip.
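As a loose illustration of that shorter-road idea, classic first-order scaling treats transistor switching delay as roughly proportional to feature size, so each shrink buys a proportional speedup. The sketch below assumes that simple linear model and uses illustrative process sizes; it ignores wire delay and the other real-world effects discussed later in the article.

```python
# Simplified first-order scaling: assume transistor switching delay shrinks
# roughly in proportion to feature size.  This ignores interconnect delay,
# leakage and other effects, so treat the output as illustrative only.

def relative_speedup(old_microns: float, new_microns: float) -> float:
    """Rough switching-speed gain from shrinking the process feature size."""
    return old_microns / new_microns

# Illustrative shrinks starting from a 0.18-micron process.
for new in (0.13, 0.10, 0.07):
    print(f"0.18 micron -> {new:.2f} micron: "
          f"~{relative_speedup(0.18, new):.1f}x faster switching")
```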

Shrinking a chip is one way chip engineers increase clock speed, but it's a task that becomes more difficult each time. However, there's more than one way to speed a chip.

"When the barriers come, you bend the rules. That's really the trick," Davari says. "If you can't expand sideways, you go up. Just like in Manhattan."

One way to bend the rules is to use new materials inside the processor. Many new materials will come into play in the coming years; one of the first and most widely adopted will be copper. IBM has already begun to use copper, rather than aluminum, to bridge the gap between transistors, a connection known as an interconnect.

"With copper, you can shrink generations more easily than with aluminum," Davari says. This opens up a new dimension, which will allow IBM to move from the current 0.18-micron process to 0.07.

AMD will use copper this year in its Athlon processor, and Intel plans to make the switch in 2001. Why the shift? Copper offers lower resistance, which lets the wiring keep up with the speed at which transistors switch on and off inside a chip. Forever looking for a performance edge, IBM and other chip makers are also working on alternative materials and techniques such as silicon-on-insulator, silicon-germanium and low-k dielectrics. Each is a different workaround for an existing problem.
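To put rough numbers on that lower resistance: for a wire of fixed length and cross-section, RC delay scales with the metal's resistivity, and copper's bulk resistivity is roughly a third lower than aluminum's. The figures below are approximate textbook values, not any manufacturer's process data.

```python
# Rough comparison of interconnect delay for copper versus aluminum wiring.
# For a wire of fixed length and cross-section, resistance (and hence RC
# delay) scales with the metal's resistivity.  Values are approximate bulk
# resistivities in ohm-metres; real on-chip metal films behave differently.

RESISTIVITY = {
    "aluminum": 2.7e-8,
    "copper": 1.7e-8,
}

delay_ratio = RESISTIVITY["copper"] / RESISTIVITY["aluminum"]
print(f"A copper wire's RC delay is roughly {delay_ratio:.0%} of an "
      f"equivalent aluminum wire's, i.e. about {1 - delay_ratio:.0%} less wire delay.")
```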

Another way to create faster chips is to integrate new features into them.

IBM, for example, is working on ways to embed large amounts of fast, dense DRAM (dynamic RAM) into multigigahertz processors. Keeping large amounts of data readily available on the chip helps performance by ensuring that the processor doesn't have to wait while data is retrieved from main memory, a trip that takes a long time in terms of processor clock cycles. IBM will demonstrate an example of that approach this week at a conference, showing off a 0.18-micron PowerPC processor with an embedded DRAM cache that it says will be able to match the speed of a 1GHz processor.

Techniques such as this "let the processor breathe," IBM's Davari says, meaning that they allow enough data into the processor to keep up with its high clock speed.
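One way to see why on-chip memory "lets the processor breathe" is the standard average-memory-access-time relation: average latency equals hit time plus miss rate times miss penalty. The cycle counts and hit rates in the sketch below are invented for illustration, not figures from IBM's design.

```python
# Illustrative average memory access time (AMAT):
#   AMAT = hit_time + miss_rate * miss_penalty
# A large on-chip cache (such as embedded DRAM) raises the hit rate, so the
# processor stalls far less often waiting on slow off-chip memory.
# All numbers below are invented for illustration.

def amat(hit_cycles: float, miss_rate: float, miss_penalty_cycles: float) -> float:
    """Average memory access time in processor clock cycles."""
    return hit_cycles + miss_rate * miss_penalty_cycles

small_cache = amat(hit_cycles=3, miss_rate=0.10, miss_penalty_cycles=200)
big_on_chip = amat(hit_cycles=8, miss_rate=0.01, miss_penalty_cycles=200)

print(f"Small cache:        ~{small_cache:.0f} cycles per access on average")
print(f"Large on-chip DRAM: ~{big_on_chip:.0f} cycles per access on average")
```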

"We've actually built transistors down to 0.05 microns in our experimental facilities, so we actually know how to build these devices," says Russell Lang, director of silicon technology strategy at IBM's Microelectronics Division. "You can see five to seven years into the future. After that it gets fuzzy."

Lang says IBM has a road map that goes as far out as 2010. Intel, meanwhile, has made predictions out to 2011, by which time it expects chips to have 1 billion transistors and run at about 10GHz, according to a paper authored by Albert Yu, senior vice president and general manager of Intel's Microprocessor Products Group.

"That's the fun of the job. You get to think about the future, five or 10 years down the road," says Davari.

Lang says, "You could say we're running up against the limits of (processor design) just about now. We tend to underestimate the creativity of the research and development community working on this worldwide."

In that vein, IBM's staff of some 2,000 engineers expects to make a number of breakthroughs along the way.

Moments of discovery happen often in East Fishkill, Davari says, "because there are so many fronts we are working on."

