
Upgrading your server: A look at the Itanium

Written by Eric Knorr, Contributor
For years the pundits scoffed. Waiting for Merced, as Intel's next-generation Itanium processor was code-named, was like waiting for Godot. First it was going to ship in 1998, then in 1999, then in 2000...until finally, Intel threw Itanium's belated release party last May.

The sneering subsided in June 2001, when Compaq made the extraordinary announcement that it would move its entire line of servers to Itanium by 2004 (and kill a strong competitor, Compaq's Alpha chip, in the same time frame). With somewhat less sweeping commitments, IBM, HP, Dell, NEC, Fujitsu, Unisys, and others have also jumped on the Itanium bandwagon.

Why such broad support? In part, simply because Intel is Intel. The market that Itanium addresses, high-end servers and workstations, has been dominated by low-volume, expensive chips that carry the same brand as the box around them: Sun, IBM, HP, Compaq. With its enormous fab capacity, Intel can apply the same economies of scale to the high-end market that it brought to the desktop, driving down system prices and shortening enterprise sales cycles. Even more compelling, Itanium marks the biggest advance in CPU design since the 1980s.

But don't write a P.O. for a back office full of Itanium boxes just yet. The chip may finally be shipping, but Itanium systems for the most part will remain unavailable until fall--and the price-performance of the first ones won't make history. Large enterprises, however, should strongly consider buying a few boxes for in-house developers, because the true benefits of Itanium emerge only when software is rewritten to exploit the new chip design. If developers get their feet wet now, then next year--when the chip should mature enough to make volume purchases sensible--you can be among the first to exploit Itanium's watershed advancements.

Slow start, bright future
How could a chip so long in coming earn a place in today's high-powered servers and workstations--a market dominated by relentless competitor Sun Microsystems? The answer depends on whether you're talking about the first Itanium chips or the Itanium architecture, which represents a major shift in processor design.

The first Itanium chips run at 733 and 800 MHz, clock speeds less than half that of Intel's fastest desktop processor, the 1.8-GHz Pentium 4. Intel acknowledges that, due to manufacturing constraints, Itanium won't break the 1-GHz barrier until next year, when a new version of the chip, code-named McKinley, will pair design tweaks with faster clock speeds. Worse, even at the same clock speed, the first Itanium chips run much of today's software slower than Intel's Pentium or Xeon processors do. The bottom line? No one, not even Intel, expects enterprises to load up on Itanium boxes in 2001.

The Itanium architecture is another matter. To begin with, Itanium has a 64-bit design--double the 32 bits of previous Intel chips--enabling it to address vastly more memory and to process data and instructions in larger chunks. More important, Itanium introduces Explicitly Parallel Instruction Computing (EPIC), a leap beyond current processor architectures--though the real performance benefits emerge only when software is recompiled to take advantage of it. More than 160 software companies now support Itanium to varying degrees, but a complete array of optimized applications and operating systems will have to wait until next year.
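
To put the 64-bit jump in perspective, here is a minimal back-of-the-envelope sketch--ordinary C, nothing Itanium-specific assumed--that simply computes the two address-space ceilings:

    #include <stdio.h>

    int main(void) {
        /* Largest byte-addressable memory with 32-bit vs. 64-bit addresses. */
        double bytes32 = 4294967296.0;            /* 2^32 */
        double bytes64 = 18446744073709551616.0;  /* 2^64 */
        double gigabyte = 1024.0 * 1024.0 * 1024.0;
        printf("32-bit ceiling: %.0f GB\n", bytes32 / gigabyte);  /* 4 GB */
        printf("64-bit ceiling: %.0f GB\n", bytes64 / gigabyte);  /* roughly 17 billion GB */
        return 0;
    }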

Meanwhile, enterprises that roll their own applications may want to buy a few Itanium systems as development platforms right now. Semico Research predicts that the shift to Itanium among high-end servers will be swift, reaching 70 percent by the end of 2003, roughly double Intel's current share. That estimate may be overly aggressive. But when the world's largest microprocessor company launches its biggest initiative ever, you'd better take a good, hard look at what's headed your way.

Itanium's EPIC challenge
The struggle for faster chip architectures boils down to a simple imperative: Process more stuff per clock cycle. Itanium borrows proven ideas and cooks up a few new ones in its effort to do more with every stroke of the engine. First, because Itanium is a 64-bit chip, it immediately joins the ranks of the fastest processors, including Sun's UltraSparc III, Compaq's Alpha 21264, IBM's Power3, and HP's PA-8000. Second, it boasts a vastly improved floating-point unit (FPU); in Standard Performance Evaluation Corporation (SPEC) benchmarks, even the first Itanium chips should match or beat the competition in numeric-intensive CAD and scientific applications. The Itanium's world-changing prospects, however, spring from its EPIC design.

Before Itanium, processor architectures fell into two camps. Reduced Instruction Set Computing (RISC), the favored approach for graphics workstations and high-end servers, holds that the fastest way to compute is to execute brief software instructions (or "words") quickly--and, in later-model RISC processors, more than one at a time. Complex Instruction Set Computing (CISC), championed by Intel, uses longer words that are harder to execute in tandem but that tell the computer a little more in every clock cycle than RISC instructions do.

Neither RISC nor CISC lends itself to executing many instructions at once, and as both technologies have matured, they have bumped against their design limits, where incremental improvements yield smaller and smaller performance gains. The EPIC architecture offers the best of both worlds: extra-long words explicitly designed to split into smaller words that a new kind of chip can process in parallel.

To benefit from EPIC's explicit parallelism, software must be recompiled into those longer, divisible words--that is, the compiler sets up parallel execution in advance. According to Linley Gwennap, a veteran independent analyst, "Simply recompiling will provide good performance on EPIC, in most cases as good or better than on comparable RISC processors." Ultimately, however, Itanium's EPIC architecture will change how developers write software. "Rewriting the source code to incorporate parallelism at the algorithm level will provide even greater performance gains," says Gwennap.
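
As a rough illustration of what the compiler is up against--plain C with made-up function names, not Itanium code--the first loop below has fully independent iterations that a compiler can schedule side by side, while the second carries a dependency from one step to the next that limits parallelism no matter how wide the processor is:

    /* Independent iterations: each dst[i] can be computed without waiting
       for any other, so an EPIC-style compiler is free to pack several of
       these multiplies into one long instruction word. */
    void scale(double *dst, const double *src, double k, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* Loop-carried dependency: every addition needs the previous sum,
       which leaves the compiler far less to parallelize. */
    double running_sum(const double *src, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += src[i];
        return sum;
    }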

After years of Intel evangelism, EPIC is already pushing the software community in a new direction. Meanwhile, the chip has other, more immediate attractions--support for huge system memory, a lightning-fast FPU, compatibility with legacy software, and so on. But the sea change in computing derives from EPIC, an architecture Intel says it will continue to refine over the next 25 years.

Real-world benefits

Buying an Itanium system to run today's desktop applications would be a dumb idea. Although Itanium was designed to be compatible with apps that run on Intel's Pentium and Xeon processors, the chip must shift into an emulation mode that slows performance to the point where, at the same clock speed, older and cheaper processors can run garden-variety productivity applications faster.

On the other hand, even the first Itanium chips shine in the workloads commonly shouldered by graphics workstations and enterprise-class servers. To decide whether to buy an early-model Itanium system for development purposes, consider the applications the new architecture benefits most in the near term--the ones that, when recompiled or rewritten, should see the biggest performance gains from EPIC.

Databases and business intelligence. A 64-bit processor like Itanium can address billions of gigabytes of system memory--orders of magnitude more than you could possibly cram into the most powerful computer today. In high-end servers, even the largest databases can be loaded from disk into memory for fast manipulation, so heavy-duty tasks such as data mining and trend analysis proceed much faster. And with four times the integer registers of most RISC processors, Itanium can do more work on chip without time-consuming round trips to system memory. Recompiling database code for Itanium increases parallel execution, pushing performance higher still.
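
As a sketch of what 64-bit addressing buys a database server--assuming a POSIX-style system, with sales.db standing in as a hypothetical data file--the snippet below maps an entire file into the process's address space and treats it as one big in-memory array, something a 32-bit address space simply cannot hold once the file grows past 4 GB:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("sales.db", O_RDONLY);   /* hypothetical multi-gigabyte table */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

        /* Map the whole file; with 64-bit pointers the size is no obstacle. */
        const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... scan or index `data` directly, with no explicit disk reads ... */

        munmap((void *)data, st.st_size);
        close(fd);
        return 0;
    }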

Security and authentication. Security algorithms drain computing power--particularly when many users log on over secure connections at once. Simply by being a 64-bit chip, Itanium helps break the bottleneck: The data blocks used by common encryption algorithms are often 64 bits or wider, so Itanium can move and manipulate them in fewer steps. In addition, according to Intel, Itanium's 64-bit integer multiply speeds the execution of security algorithms by a factor of three to five. Once developers recompile security software for Itanium, the looping calculations at the heart of encryption can execute in parallel, yielding faster results.
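
A toy example--not a real cipher, and the function names are invented--of why block width matters: the 64-bit version touches a whole encryption-sized block in a single operation, while a 32-bit chip must split the same work into high and low halves:

    #include <stdint.h>

    /* One 64-bit operation covers the entire block. */
    uint64_t whiten64(uint64_t block, uint64_t key) {
        return block ^ key;
    }

    /* The same work on a 32-bit chip: two operations on two halves. */
    void whiten32(uint32_t block[2], const uint32_t key[2]) {
        block[0] ^= key[0];
        block[1] ^= key[1];
    }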

Technical and scientific. Numeric-intensive applications such as CAD, CAE, and life-sciences software get an immediate boost from Itanium's souped-up FPU. Before Itanium, Intel's FPUs were routinely trounced by those in RISC chips; SPEC benchmarks show that, even running code that hasn't been recompiled, Itanium has pulled ahead. The processor's fused multiply/add operation forms an efficient core that maps well onto the algorithms common in technical and scientific computing--and it produces results with smaller rounding errors than other processors. Recompiling numeric-intensive applications lets floating-point instructions operate in parallel, a gain that may be dramatically evident in 3D graphics applications.
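
As a small sketch of why fused multiply/add matters--standard C99, with the library routine fma() standing in for whatever instruction a vendor compiler would actually emit--a dot product is nothing but multiply-then-add steps, exactly the pairing the hardware fuses into one operation with a single rounding:

    #include <math.h>

    /* Each iteration is a multiply immediately followed by an add; fma()
       expresses the pair as one fused step, and an optimizing compiler can
       map the plain a[i]*b[i] + acc form the same way. */
    double dot(const double *a, const double *b, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc = fma(a[i], b[i], acc);
        return acc;
    }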

Component software architectures. The Web services trend--in which developers vastly reduce integration overhead by wrapping .Net components or Enterprise JavaBeans in standardized XML--promises to have a huge impact on enterprise computing. Unfortunately, distributed component architectures tend to produce "branchy" code, where dependencies among far-flung components can exact a big performance penalty. But when developers recompile component-based applications for Itanium, that branchy, data-dependent code becomes an opportunity for increased parallelism.
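
A simplified illustration--the discount rule and function name are invented--of how branchy, data-dependent code can be reshaped into straight-line work: instead of guessing which way an if/else will go, a parallel-minded compiler can compute both candidate results and keep the right one, the kind of transformation EPIC-style predication is meant to encourage:

    /* Both candidate prices are computed up front; the final select replaces
       a hard-to-predict branch with straight-line arithmetic. */
    double quoted_price(double list_price, int is_preferred_customer) {
        double discounted = list_price * 0.90;   /* invented 10% discount rule */
        double full       = list_price;
        return is_preferred_customer ? discounted : full;
    }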

Itanium should boost performance in other areas, too--supporting large memory for Web server caching, for example, or providing the FPU muscle for digital animation calculations. But the new architecture will bear full fruit only when the software community steps up to the plate. Sun excepted, every high-profile software company--from Microsoft to IBM to Oracle to Red Hat--has announced its support. The signs are good enough for forward-looking enterprises to get a taste of the future now.

Eric Knorr is a veteran technology journalist and consultant based in San Francisco who also writes for ZDNet TechUpdate.
