Drilling into dual core

Dual Core Special Report Part 1: The development of multi-core chips will have repercussions for hardware design, performance and, most importantly, software licensing

The microprocessor industry is going through a major technological shift that will have big implications for PC and server makers, software vendors and, not least, those buying the technology. The transition will see two or more processor cores placed on a single die, a strategy first put into practice by mass-market chipmakers this spring; by the end of next year, the vast majority of chips sold by Intel and AMD will be multi-core.

Hardware makers are taking pains to reassure customers that the shift to dual-core is nothing out of the ordinary — just the latest design technique for keeping processor performance on the rise. But dual-core and multicore chips are leading to wider changes in the way hardware is designed, what performance users can expect from it and the way software is licensed.

Dual-core is part of a wider industry trend away from reliance on processor clock speed as the main driver of performance. High-end Unix servers moved away from clock speed years ago, shifting to more efficient RISC processors and, starting in 2000, dual-core chips. In 2001 AMD jumped on the bandwagon, arguing that ratcheting up chip frequency — known as frequency scaling — was offering diminishing returns.

AMD began using a more efficient chip design and stopped using clock speed in its marketing materials, instead offering model numbers that gave a performance estimate. Following this path led AMD to offer its 64-bit extended instruction set, and, in April of this year, its first dual-core Opterons for servers and workstations. Dual-core Athlon desktop chips arrived in May.

Intel stuck with frequency scaling much longer, producing chips that ran at higher and higher frequencies, sucked up more power and produced more heat. Finally, however, Intel was forced to admit that a more efficient approach was needed. The company tore up its roadmap, killing a planned chip called Tejas and coming up with a new strategy that had dual-core as its centrepiece.

"Intel is going multi-core because it needed a competitive response to the AMD strategy," says Peter Glaskowsky, an architect with Silicon Valley microprocessor start-up MemoryLogix and analyst for Envisioneering. "Tejas was so big, complex and hot that Intel had to kill it."

Intel's first dual-core chip, the Pentium D, came out in April for desktops and July for low-end, single-processor servers. The company now has 17 multicore projects in development, and says more than 85 percent of its chip sales will be multi-core by the end of next year. (AMD's estimate is slightly higher, at 90 percent.) The first dual-core Itaniums are to appear by the end of the year, as are the first dual-core Xeons, originally planned for 2006.

On top of that, Intel is rolling out a new architecture designed to replace the NetBurst system behind the Pentium 4, and to unify its laptop, desktop and server lines. The new architecture and its surrounding platforms will be based on multicore chips and inspired by the integrated approach of Centrino.

At first glance, the benefits of multicore seem straightforward. The new chips are designed to allow seamless upgrades from single-core processors, with just a BIOS update. Because each core can run at a lower frequency, heat and power requirements stay roughly the same, yet performance jumps substantially. In one case, a dual-core Opteron system showed a performance improvement of more than 80 percent for a SAP application environment, according to HP, though most affected applications will see gains more like 40-50 percent.
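The trade-off described above can be sketched with a back-of-envelope calculation. Dynamic CPU power scales roughly as P ∝ C·V²·f, and lowering a core's frequency also allows a lower supply voltage, so two slower cores can deliver more aggregate throughput than one fast core inside the same power envelope. The numbers below are purely illustrative assumptions, not vendor figures:

```python
# Back-of-envelope model of why two slower cores can beat one fast core in
# the same power envelope. Dynamic CPU power scales roughly as P ~ C * V^2 * f.
# All numbers here are hypothetical, chosen only to illustrate the shape of
# the trade-off.

def dynamic_power(capacitance, voltage, frequency):
    """Approximate dynamic power draw of one core: C * V^2 * f."""
    return capacitance * voltage**2 * frequency

# Baseline: one core at 3.0 (arbitrary frequency units), 1.4 V.
single = dynamic_power(1.0, 1.4, 3.0)

# Dual-core: each core clocked down to 2.0 and run at a lower 1.15 V.
dual = 2 * dynamic_power(1.0, 1.15, 2.0)

# For a perfectly parallel workload, throughput scales with total core-cycles.
single_throughput = 3.0
dual_throughput = 2 * 2.0

print(f"power ratio (dual/single): {dual / single:.2f}")   # below 1.0
print(f"throughput ratio:          {dual_throughput / single_throughput:.2f}")
```

With these made-up figures, the dual-core part draws about 10 percent less power while offering a third more peak throughput, which is the broad shape of the argument chipmakers are making.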

Customers have paid attention, and are already lining up to buy dual-core systems. Since AMD was first off the blocks, it has benefited most so far, taking more than 11 percent of x86 server shipments in the second quarter of 2005, up from about 7 percent in the first quarter, says Mercury Research. AMD has never before passed the 10 percent mark for server shipments.

New, smaller manufacturing processes made dual-core chips practical about five years ago, but power management has only more recently become a major issue for enterprises. That's partly because of the rise of dense server installations, particularly blade servers. "These dense technologies we had developed actually created a different issue for us, which was how do you power these, how do you cool them, how do you ensure these systems operate effectively?" says Phil McLean, HP's UK server product manager.

Hardware makers are now marketing servers based on a performance-per-watt measurement. Dell says its low-end PowerEdge 850 server, with a dual-core Pentium D, offers a 43 percent performance gain per watt, and HP has similar figures for its ProLiant DL585, a four-way, dual-core Opteron server.

On the desktop, the main benefit of multi-core could be smaller, quieter systems along the lines of the Mac mini; indeed, a dual-core mobile chip from Intel is likely to power future versions of the Mac mini. Some analysts have gone so far as to predict the end of the tower unit, with its noisy cooling systems. Operations like media encoding will particularly benefit from dual-core.

The new dual-core world isn't without its hitches. For one thing, not all software benefits equally from the presence of two cores, a fact companies should keep in mind when buying new systems. Software needs to be able to take advantage of multiprocessing and multithreading to fully benefit. "Enterprise platforms are highly threaded today and, as such, we expect that they will be able to take great advantage of the compute capability provided with dual-core and multicore," says Jeff Austin of Intel's product marketing group.
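The restructuring Austin alludes to can be sketched in a few lines: a CPU-bound job only benefits from a second core if it is split into independently schedulable pieces. This hypothetical example uses process-based parallelism (in CPython, the interpreter's global lock prevents plain threads from running bytecode on two cores at once); the function names and workload are illustrative, not from any vendor:

```python
# A minimal sketch of the kind of restructuring dual-core rewards: a
# CPU-bound job split into independent chunks that the OS can schedule
# onto separate cores. Process-based parallelism is used because CPython's
# GIL keeps plain threads from executing bytecode truly in parallel.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum the squares of integers in [lo, hi) -- a stand-in for real work."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=2):
    """Split [0, n) into one chunk per worker and sum the partial results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    total = parallel_sum_of_squares(100_000)
    # Same answer as the serial loop; the win is wall-clock time on 2 cores.
    assert total == sum(i * i for i in range(100_000))
    print(total)
```

Single-threaded software run on a dual-core chip gains nothing from the second core; only work decomposed like this, or already-threaded server software, sees the advertised speed-ups.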

Transaction processing, databases, scientific applications and media encoding are among the types of software that should show significant performance gains. Desktop productivity software won't really benefit, since it doesn't need much performance in the first place. Some classes of software, like gaming, will actually take a performance hit, according to chipmaker benchmarks.

The initial dual-core designs, particularly Intel's, will need significantly more development before they can take full advantage of the efficiencies of dual-core. "Intel cobbled together its dual-core plan at the last minute. That's why Intel's dual-core chips so far are just two independent chips that haven't been cut apart," says Glaskowsky. "That approach leads to large amounts of redundant logic."

Because current Intel and AMD chips are designed to fit into a single existing socket, both cores are connected to the same power supply, Glaskowsky points out. That means a single core can't be put to sleep to conserve power unless both cores are idle. Dual-core server chips on the way from Intel and AMD will continue to use a separate memory cache for each core, a less efficient approach, but one that simplifies the design work.

It will be some time before chipmakers have fully adjusted to a multicore world, Glaskowsky says: "In the long run both AMD and Intel will provide more sophisticated multi-core designs that will solve all of these problems, but it will take a few years to get there."

Hardware aside, the new chips are leading software vendors to re-evaluate the way they license their products, which could mean sudden increases in some companies' software costs. Hardware makers disagree, but vendors may see such increases as justified, given that buyers are effectively getting two processors for the price of one.

It's a mistake to cling to obsolete ways of thinking about hardware performance, Glaskowsky believes. "Multicore technology may finally force users and software vendors to look at overall performance, not specific implementation details," he says. "They adapted to multiprocessing years ago, but more recently, basically ignored multithreading. AMD and Intel would like everyone to ignore multicore technology, but that won't happen."