
Chip designers hitting the wall?

Chip designers have faithfully kept pace with Gordon Moore's prediction that the number of chip transistors will double every two years--but we may soon need nuclear power plants to fuel CPUs.
Written by Michael Kanellos, Contributor
For designers working on the next generation of microprocessors, Moore's Law is becoming Moore's Curse.

Moore's Law--the observation by Intel co-founder Gordon Moore that the number of transistors on a chip will approximately double every 18 to 24 months--has been the bedrock of the high-tech industry for years, as the phenomenon has given chip designers a way to progressively incorporate new features on silicon.

Chip designers, however, are now butting up against the laws of physics. Ten years from now, chips will run at 30GHz and churn through a trillion operations per second. Unfortunately, with today's design technologies, that would mean chips putting out, relative to their size, as much heat as a nuclear power plant.

"Power is a wall," said Jason Ziller, director of the microprocessor research labs at Intel. "We're entering an era of constraints on power. There will be a shift in design. For many years, the industry was almost totally focused on hertz, megahertz and gigahertz, and delivering the highest number."

Limiting power in these devices won't be easy, and the minuscule size of transistors will make manufacturing them arduous. To continue on the historical trajectory, a number of changes will take place: These will involve dual-core chips, micro-power sources that can deliver small jolts of electricity to the undersides of processors, and new chemical compounds to replace or enhance silicon.

Manufacturing revolution
The chief problem facing designers comes down to size. Moore's Law works largely through shrinking transistors, the tiny switches that control electrical signals. By shrinking transistors, designers can squeeze more of them onto a chip.

More transistors, however, means more electricity, high-speed signals and heat compressed into an even smaller space. In addition, smaller chips run at faster speeds, increasing performance but compounding the complexities.

So how do you defuse these conflicts? Through experimentation, some say.

Using optical fiber and tiny lasers--rather than copper or aluminum wires--to connect elements on motherboards and eventually inside processors themselves could prove a huge boon, as fiber uses less power but provides better performance. It's expensive now, but that will change over time.

"We will be pushed to develop optical interconnects," said Bill Pohlman, CEO of Primarion, a chip design company. "It is likely that the quality of (microprocessor designs) will be rated by how they perform within a given power level."

Nanotubes and clocks
IBM scientists, meanwhile, have said they expect carbon nanotubes to replace silicon transistors in 10 years or so. Either way, copper wires, which recently replaced aluminum in chips, won't enjoy a long life, predicted Dave Epstein, a venture capitalist with Crosslink Capital.

Other designers are looking at asynchronous clocks, which cut down on wasted work by not requiring different functions to be performed in lockstep at the same time, said Dave Tuttle, senior director of the Austin Design Center for Sun Microsystems.

The shrinking size of processors will also present challenges in drawing ever-smaller circuits. The wavelength of light, which is used to "draw" transistors through lithography, measures 250 nanometers, or roughly twice the size of the average features on today's 130-nanometer chips and five times as big as some of the features on these chips. To get around that problem, Numerical Technologies licenses technology that creates interference patterns that appropriately reduce the beam that hits the wafer.
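The size mismatch that Numerical works around is simple arithmetic. In the sketch below, the 50-nanometer figure is an assumption inferred from the "five times" comparison above, not a number from Numerical:

```python
# Arithmetic behind the lithography gap: the light used to pattern
# chips is wider than the features it must draw.
wavelength_nm = 250        # deep-ultraviolet light used in lithography
avg_feature_nm = 130       # average feature on a "130-nanometer" chip
small_feature_nm = 50      # smallest features (assumed from the ~5x figure)

print(wavelength_nm / avg_feature_nm)    # ~1.9: roughly twice the average feature
print(wavelength_nm / small_feature_nm)  # 5.0: five times the smallest features
```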

"You create images and shadows and very fine geometry," said Stan Mazor, director of customer services at Numerical and one of the three engineers behind the first microprocessor, Intel's 4004.

Extreme ultraviolet lithography (EUV), a chipmaking process expected to come into use around 2005, will allow designers to develop chips with average features measuring 70 nanometers. Although EUV uses a much finer light source, Numerical's technology will still be in use because "they aren't going to change their equipment overnight," Mazor said.

New chip packages, the housing that connects the chip to the rest of the computer, will use denser arrays of wires to match these smaller, more refined chips, said Koushik Banerjee, the technical adviser for Assembly Technology Development at Intel. In addition, these packages will include an array of minute power supplies capable of delivering jolts of energy to various places on the chip at once. Currently, chips tend to draw energy from a central power source.

"What kills processor performance is delays and interconnects," Banerjee said.

Servers take the lead
IBM scored one of the major breakthroughs in chip design this year with its Power 4 chip, which combines two processors on a single piece of silicon. Combining two processors this way increases performance because they can communicate at a much faster rate and share resources. At the same time, power consumption goes down because the electrical pathway between them has been dramatically shortened.

IBM also has designed its server in such a way that four of the dual-core chips fit snugly in a single module to boost performance even further in multiprocessor servers.

A multiprocessor setup "contains hundreds of chips and many miles of crisscrossing wires," said Bradley McCredie, distinguished engineer at IBM's server group.

Similarly, Hewlett-Packard will release the PA-RISC 8800, or "Mako," processor--its first dual-core chip--in 2003. The chip combines two PA-RISC 8700 processors running at 1GHz. Results of benchmark tests from HP indicate that each of the cores will run as well as Intel's upcoming McKinley processor, noted Nathan Brookwood, an analyst at Insight 64.

Intel and Sun are also very interested in multi-core chips but say such designs for now are uneconomical.

"If (IBM) doesn't work on reducing the cost, it will remain a mainframe-level niche product," said David Yen, vice president and general manager of Sun's processor product group.

Meanwhile, Intel won't come out with a multi-core design on the 130-nanometer manufacturing process, according to Dileep Bhandarkar, director of the enterprise architecture lab at the company. As a result, the earliest dual-core chip from Intel won't emerge until after the company starts manufacturing chips on the 90-nanometer process in 2003.

Similar in principle to dual-core processors is multi-threading, a technique that allows a chip to handle parts of two separate applications at once.

Hyper-threading, Intel's version of multi-threading, delivers more performance for the energy spent, increasing throughput by 18 percent to 30 percent, depending on the application, said Glenn Hinton, an Intel fellow. The extra circuitry for hyper-threading, which will be used in a Xeon processor in the first quarter, takes up less than 5 percent of the chip's space.

"For a very small investment in die size, we get a substantial increase in performance," Hinton said. "To add performance now, you have to add transistors and add cache."

Making better connections
Nearly every company is also looking at improving chip-to-chip connections. One of the key features of AMD's upcoming Hammer processor is that the chips inside servers will communicate through HyperTransport links, a technology from AMD, rather than through the oft-crowded common thoroughfare of the chipset, according to AMD's chief technical officer, Fred Weber.

Although samples of the chip will not come out until next year, AMD says Hammer will be able to trounce other server chips on common benchmarks. "I'd like to think of this as the result of good plumbing," Weber said.

Hammer, like other upcoming server chips, also relies on improved speculation, the technique of letting the chip figure out its next task before it arrives.

Notebook issues
By contrast, notebook designers face slightly different issues. As with servers and desktops, reducing heat remains a big problem. But notebook designers also have to worry about stray electric current.

"Leakage of current is a big problem for notebooks," said Kevin Krewell, an analyst at the Microprocessor Report. "Slowly but surely it kills your battery life."

To this end, designers are experimenting with integration--putting more functions on a single processor--as well as with improved power-management techniques. Transmeta's Crusoe chip, for instance, automatically shuts down various subsections when they are not in use. The Crusoe 6000, coming next year, will also include a built-in graphics core, according to CTO David Ditzel.
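A minimal sketch of that shut-down-what's-idle idea: total draw is just the sum over powered-up subsections, so gating idle ones directly cuts the bill. The unit names and wattages are invented and reflect no actual Crusoe block:

```python
# Hypothetical per-unit power draw, in watts, for an imagined mobile chip.
UNIT_POWER_W = {"core": 4.0, "fpu": 1.5, "cache": 1.0, "io": 0.5}

def chip_power(active_units):
    """Total draw when only the named subsections are powered up."""
    return sum(UNIT_POWER_W[u] for u in active_units)

print(chip_power(UNIT_POWER_W))        # everything on: 7.0 W
print(chip_power({"core", "cache"}))   # fpu and io gated off: 5.0 W
```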

Banias, a notebook chip coming from Intel in 2003, will take a similar approach. When it arrives, Banias will provide a 25 percent performance boost over whatever Intel notebook chips are current at the time, while also improving battery life by 25 percent, said Paul Otellini, general manager of the Intel Architecture Group.

Read ""="">The chip revolution turns 30.
