Intel: Montecito or bust?

IDF: The general manager of Intel's Enterprise Platform Group talks about the performance of its latest dual-core Montecito chip
Written by Rupert Goodwins, Contributor

Following Intel's first public demonstration of its latest Itanium -- the dual-core Montecito -- Abhi Talwalkar, general manager of Intel's Enterprise Platform Group, talked to ZDNet UK about future plans for the technology, how Itanium is currently being targeted and how the market for the chip will develop.

If you drop a Montecito into a Madison-based system, you claim around a 2x performance increase. Can we expect that sort of gain with each new generation of Itanium?
That's without recompilation -- you can get better than that if you recompile. For future performance increases, you have to look at Itanium history. From Merced to McKinley was a 2x increase -- that was an architectural change. From McKinley to Madison, a process shrink, was 1.5x; but Madison to Montecito is an architectural change again, with dual cores and multi-threading, and we're back to 2x.

It all depends on whether there's an architectural change or a process shrink. Montecito was a ground-up redesign to make it the best multi-core architecture. The cache size increase is another architectural change, but you'll see other changes besides that. There are good questions about what you do if you have more than two cores -- what's the cache structure then? Montecito's cache is 24 megabytes (MB), but it's really two independent caches of 12MB each, fed by one bus interface. It's a pretty big bus.
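Taking the generational multipliers quoted above at face value, a quick back-of-the-envelope sketch shows how they compound from Merced to Montecito. These are headline figures only; real gains depend on the workload and on whether you recompile.

```python
# Illustrative sketch of the cumulative speedup implied by the quoted
# per-generation multipliers (not Intel benchmark data).
generations = [
    ("Merced -> McKinley", 2.0),    # architectural change
    ("McKinley -> Madison", 1.5),   # process shrink
    ("Madison -> Montecito", 2.0),  # architectural change: dual core, multi-threading
]

cumulative = 1.0
for step, multiplier in generations:
    cumulative *= multiplier
    print(f"{step}: x{multiplier:g} (cumulative x{cumulative:g})")

# On those figures, Merced to Montecito compounds to roughly 6x overall.
```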

How will Intel's lack of a chipset for larger servers affect the market?
We have our own chipset for two-way and four-way, but a lot of the development is greater than four-way, and there the OEMs are developing their own chipsets. Most of these guys are RISC vendors; they've been investing on the RISC side, and that investment has been shifting more and more over to Itanium 2. NEC's a great example: NEC decided six or seven years ago to decrease investment in large-scale systems based on its proprietary microprocessors and to increase investment commensurately in Itanium 2. There'll be no white-box activity above four-way Itanium 2.

Won't that space be filled by Xeons?
There are 16-way and, I think, 32-way Xeons in that space. Most of the scale-up work is on Itanium 2, because of our architectural advantages in the highly parallel EPIC architecture. We scale up a lot better; we have virtualisation, which will show up first in Montecito and is a great technology for scaling up platforms; and we have RAS [reliability, availability and serviceability] advantages in the silicon itself.

Talking of virtualisation, what is the relationship between Silvervale and Vanderpool (the virtualisation technology already demonstrated on client processors)?
Silvervale and Vanderpool have the same baseline technology, because we don't want to drive two different efforts from software vendors. There'll be specific enhancements for the server market, for the specific needs it has. But we want to keep that as seamless as possible, so that the core software development work companies have to do for virtual machines is common. There'll be a base set of capabilities, a base architecture that's consistent across both, and then you'll see a divergence, with optimisations for the client and server sides.

How will the introduction of the Common Platform Architecture affect the dynamic between Xeon and Itanium?
The common platform architecture, in 2006, will have a common bus and packaging [between the two chips], but the processors themselves will remain different architectures. So OEMs that want to can design one chipset that covers both processors, for all the multi-processor and multi-core configurations, on the common bus. We haven't released any details on the bus, but we're sharing it with customers and OEMs that are developing large-scale chipsets.

Are you adjusting your targets for Itanium from those you had at the beginning of the year?
From the consumption standpoint, things have been pretty good. The other thing we're very pleased with is the growth of large-scale systems. HP's systems started hitting the market more aggressively earlier this year, and we're starting to see large-scale Itanium systems from other companies hitting the market too. So this year, end-user deployments and wins are probably running at twice the pace of last year. We have a goal of getting Itanium deployed in at least 50 of the global 100, and I think we're at 38 to 40 by now.

How about very high performance computing?
Intel architecture has taken that world by storm, with tremendous growth: 285 of the top 500 systems are now Intel-based. But it's not where our primary focus is. That market is roughly six to seven billion dollars, while the server market is about 50 to 55 billion in size. It's a critical market, in that all sorts of interesting technologies are developed and incubated there and work their way down to the server market -- clustering and scale-out capabilities are going to make their way from HPC down to business.

What about Mike Fister's [Talwalkar's predecessor, who left Intel last month] promise to drive Xeon and Itanium costs down to parity?
That's about driving down the overall cost structure: the commonality of power delivery systems, the chipset, other hardware, the common platform concept. These economies will help bring the cost points down. Right now we've got a very clear strategy: IBM's the target. We want this to be, and I think it's becoming, a two-horse race.
