x86 facing real competition

Look at history from 1980 forward and what you see is that the Wintel partnership that evolved around the IBM PC gave us about 24 years of predictable change, in which the dominant architecture became ever faster but didn't really change in character. Look ahead, and you see all that changing: in effect it's 1979 all over again, with two big competitors getting ready to duke it out and the current market leader, now Intel instead of Zilog, effectively sidelined for the duration.

Most of us have gotten used to the idea that the next-generation Intel processor will be like the current one, only faster, but we're about to see more change in CPU architectures and markets than anything that's happened since Exxon took Zilog out of play in 1980.

In those days the big microprocessor companies were Zilog and Motorola. Intel was an also-ran, shopping a downgraded 8086 -- known as the 8088 for its compatibility with older 8-bit peripherals -- to IBM for use in the PC. Today Zilog's Z80 and its descendants appear in things like television remote controls, Intel's x86 defines the hardware basis for most of the desktop software in use around the world, and it's IBM, not Freescale Semiconductor (formerly Motorola's microprocessor division), that's driving PowerPC development.

Intel got its chance when IBM bought the 8088, but its market dominance since then has rested on backward compatibility with Microsoft's software. Its current efforts to follow AMD into 64-bit and multi-core technologies therefore run ahead of that core marketing strategy, because this time it's Microsoft being forced to follow Linux and the other Unix variants in supporting the new hardware. That puts the cart before the horse, and one way to read the Itanium's market failure is that the horse refused to pull -- that is, Microsoft didn't go out of its way to create an installed software base for it. As a result, Intel doesn't seem to have an x86 replacement in the works and thus looks like a company on a collision course with an iceberg.

In contrast to Intel, IBM has two new CPU offerings in the pipeline, both ultimately derived from the Motorola laboratories that Intel beat out in 1979 and 1980 to get the deal with IBM. For Microsoft's Xbox, IBM has apparently developed a triple-core, 3.2GHz PowerPC derivative with a local short-array (vector) processor and the ability to handle two threads per core. Officially it's intended only for the gaming community, but there doesn't seem to be any reason to think you couldn't use it as a Windows 2000 desktop or server processor. In that role it has the bandwidth (22GB/sec) to be about three times faster than the top-end Xeons and the volume, courtesy of IBM and the Xbox, to cost rather less.

Unfortunately, we don't know (or at least I don't) what's in the contract between Microsoft and IBM on this thing and therefore can't reasonably predict what they'll do with it. One way or another, however, the Xbox processor appears to be an evolutionary step in the history of PowerPC and one that will have consequences for Intel's dominance among chip makers aiming to support Microsoft's software.

In contrast to the evolutionary approach taken for the Xbox, IBM's work in the Cell consortium with Sony and Toshiba is nothing short of revolutionary. Notice, however, that the term "Cell processor" is a misnomer: the Cell patent (US Patent No. 6,526,491) is about managing interprocessor communications, not hardware. The machine we think of as the Cell processor is merely the "preferred embodiment" of those ideas.

When released late this year or early next, the first such "embodiments" are expected to come in four- and eight-way configurations in which each chip contains a dispatch master, either four or eight "associated processors," and all the memory, disk, and network interconnects normally associated with a four- or eight-way grid. As a result, most of the inefficiencies associated with small grids simply disappear, allowing some loads to run up to 50 times faster than on Xeon. Most open source Linux applications should therefore run on the 3.9GHz four-way machine about as well as on Xeon if left unchanged, and about ten times faster if recoded to take advantage of the Cell's capabilities.

Sun has its own entry in the CPU race. Its idea, called throughput computing, puts the hardware required to support Solaris SMP on a single chip. The first example of this, the Niagara1 machine, is now in laboratory and customer test and runs 32 concurrent threads. On single processes it's nowhere near as fast as Cell (or even Xeon), but it handles multiple workloads extraordinarily well and runs ten-year-old SPARC binaries unchanged -- without significant performance penalties.

Right now Cell and throughput computing represent very different approaches and suit equally different workloads: Cell for work that's floating-point intensive, Niagara for high-volume character pushing. For example, a deskside computer using 16 eight-way Cells should handily outperform a supercomputer built from 1,600 Dell Xeons on most code. Similarly, a rack containing a single-CPU Niagara2 (with on-board TCP/IP) should outperform a rack of 16 Dell Xeons on tasks like web services.
