The Big Interview: Pat Gelsinger

Intel's Pat Gelsinger on the future of Itanium, technology in the developing world and the one-chip blade server of tomorrow

Pat Gelsinger is one of Intel's best-known executives. In nearly 30 years with the company, he has led the design team for the 80486 processor, produced the ProShare videoconferencing system and been chief technical officer during some of the most exciting — and troubled — times for the chip manufacturer. Now in charge of Intel's biggest and most important division, the Digital Enterprise Group, he's responsible for both the Core and Itanium processor lines.

In London for a conference on power management in data centres, Gelsinger sat down with ZDNet UK to discuss a wide range of issues, in particular how Itanium is converging with the new Core architecture, how quickly the developing world is catching up in supercomputing, and the debate he doesn't expect to see concluded before 2020.

Q: How are you getting on with moving developments in the Core architecture onto the Itanium?
A: Itanium used to be a shared development process between HP and Intel. We've consolidated that with the agreement we announced two years ago, which allowed us to integrate all of the Itanium development activities and get a consistent development methodology. Since that move, we've basically hit all of our timings — the first Montecito slip aside, we're back on track.

Part of that on-trackness means we can leverage the same circuit design libraries, process technologies, all of those other things we were not doing a good job with before. So going forward, the circuit techniques, the power-management technologies, all those sorts of things are much better leveraged. The first realisation of that is Tukwila [quad-core Itanium] in late 2008, the next step in the product family, where we move to common system architecture elements, as well as full alignment on design tools and process. It's still a different microarchitecture, a different instruction set, still aiming at a different market segment than the core of our product line. I'm driving for more convergence in Poulson [post-Tukwila Itanium] and beyond.

Presumably the cache architectures are converging as you move to a common bus?
Yep. You just get more and more commonality, and some of the differences we had before weren't there for good reasons, so we're bringing those together. I'm pretty happy that this gives us much better leverage for the R&D investments. As you move to a common systems architecture it's a much better investment for the customers as well. HP can say: "I can do a platform development, so I have a lower-end Xeon platform that can be used to bring Itanium lower in my product line", so you start to get that not just in our developments, but also in the OEM developments.

Do you see convergence continuing to the point where there's one chip with a mode bit [making it compatible with Core and Itanium]?
I don't see it getting that far, but I am driving these things to be as common as possible.

How's the heterogeneous versus homogeneous multicore debate going within Intel?
I expect that debate to be going until 2020, and I expect — in my crystal ball — different market segments coming to different conclusions in that discussion. You can clearly envision, and this is an easier discussion to have after IDF [Intel Developer Forum] than it is today, so we'll have to have the next instalment of this discussion after 17 April, but you can see the lower end of the product line having homogeneous, little cores. You could imagine the mid-range of the product saying: "We need some big cores for performance, but little cores are more efficient for certain portions of the workload." You can imagine some embedded applications where you have big cores but with some special-purpose cores for other, specific applications, maybe XML acceleration or packet processing or other things like that. A range of building blocks from little cores to big cores to special-purpose cores. You now have a fabric of choices to mix and match for the market segment.

Won't you need some complicated design and verification tools to maintain a large library of very complex cores? Is that a limiting factor in the speed at which you can develop them?
Sure. Verification is already a limiting factor. That ends up being the rate-limiting portion of new products coming to market. That continues to be the case looking forward, although I do expect that to be helped by formal verification methods and formalisation of on-die interconnect. What happens is that near the CPU is a great sucking sound...

Some CPUs suck more than others...
In multiple respects... ah, anyway, the CPU starts hauling everything in. What you saw as the system architecture yesterday, tomorrow is the on-die architecture. As that starts to come together, some of these formalisations, interfaces, etc become part of the die. It's not that far away until you'll see the one-chip blade.

On 45nm, you've just announced a new transistor design. How far will it take you before you have to have another look at the transistor architecture?
There's the structure of the transistor and the materials of the transistor. The materials we just announced, the move to hafnium and metal gate, are good for quite some time. We don't expect to change the material structure for a while; improve it, tune it, perhaps, but it's going to last us for several generations. In terms of the structure of the transistor, we've already been talking about changing to a tri-gate structure, changing the physical structure to get better control as well as more surface area, which helps with leakage. We're looking to make changes; that's one of our key research areas over the next couple of generations. We're good at 45 [nm], not likely to change at 32, but beyond that it's pretty likely we'll look at a new structure.

You've talked a lot about power and environmental factors in the data centre. What's Intel doing?
We've been doing a lot as a company ever since Gordon Moore; he had a penchant in this direction. We plan our own operations in terms of environmental efficiency, we sponsor a lot of initiatives in the industry, and obviously our energy-efficient product line has been a big deal for us.

How well is the move to more efficient computing going?
You need metrics to measure it; like any of these kinds of things, there are lies, lies and benchmarks. We've worked on SPECpower, vConsolidate and ecomark, which have all been important efforts for us in defining how things work. We've had good success with a number of the big data centres and started on our own operations. What we've seen is this incredible densification of the data centre, and it's led to the compute space being compressed by something of the order of 20 times over the past decade.

Generally, the thermal envelopes have gone down by about two [times], but because the computing space is getting denser you're seeing almost 50 times the amount of power density. That's pretty stunning. Data centre managers are putting 100 servers where they used to have 10, and the amount of compute you're getting in that space is typically two times what you had before, so with Moore's law and other microarchitectural improvements, the performance you're delivering is pretty stunning.

Where are the tools for power management?
Intel wouldn't claim that we've solved all of those problems, but we're also working with the key OEMs, HP, IBM and so on, as well as working directly with some key users, giving them our BKMs — our best known methods — and applying them to their environments. You'll see a number of different announcements in the very near future, to put these ideas under a broader umbrella.

Are there major differences in data centres around the world?
The developing world isn't as far behind as you might think. Their sophistication in planning and building their data centres is rapidly catching up to the mature markets, but there's still a gap. One unexpected key sign is that every one of the major emerging countries — Russia, India, Brazil and so on — has major high-performance computing projects as well as major mega data centre efforts underway. You're seeing Baidu trying to position itself as the Google of China, and you're seeing China and India putting petaflop programmes in place to be at the front edge.

Why can't they leapfrog, as with communications, by taking everyone's best practices without their legacy?
I don't see them leapfrogging at this time, but I see the five-year gap we used to expect becoming a much shorter gap in these scenarios, maybe a one- or two-year gap at this point. But they're coming on strong. It's amazing. You go and see a Baidu data centre and it's pretty impressive. And you look at India saying, "we're going to have a petaflop machine in 2008"; that's pretty impressive for a country that not long ago wasn't even in the high-performance computing race, and it could be literally number two or number three in the world. They see the challenge of racing China as well as looking north, and both of those have brought a lot of impetus to installing IT infrastructure.

When do you see power becoming an important issue for smaller data centres, ones with handfuls of servers?
If you're just talking 30 or 40 servers then power's not that big a deal. Only hundreds of dollars' difference per year. But people are environmentally concerned, so they're putting those priorities ahead of just the savings associated with them. If I ran a Google data centre I could be talking about millions of dollars of operational costs per year, plus as a company they're trying to position themselves at the front end as eco-friendly and environmentally conscious, as part of their corporate positioning, and I think you're going to see that trend increase. We're seeing the digitisation of industries. Amazon is becoming a retailer of mammoth proportions. Google's out to digitise the world. The environmental impacts of these data centres are increasingly concerning governments, as environmental issues become more important in general.
