What those secrets are, I cannot say -- although I was there, we're sworn not to divulge the information until the products are launched. But I can reveal that we were looking at Xeons, the chips with which Intel hopes to corner large chunks of the server market, and that Intel is pinning its hopes on the increased performance of the new design. In particular, hyper-threading: a technology that Intel has already confirmed is in one of the chips, the Xeon Processor MP, and that the world knows is already built into Pentium 4s, but disabled.
Hyper-threading is still something of a mystery. The basics are public: every Pentium 4 -- and thus Xeon, which has the same basic architecture -- has nine execution units inside, the bits of the chip that actually do sums, carry out logic, check results and so on. Although the Pentium goes to great lengths to keep those busy by carrying out multiple instructions at once, it can only cope with one lot of programming code shuttling through its vitals. This isn't enough: on average, only 35 percent of the chip is actually working out problems at any one time.
We know that hyper-threading adds extra circuitry that copes with a second stream of program code and data -- a thread, in the parlance -- simultaneously. To the operating system and applications, it looks as if there are two completely independent processors on the motherboard, and if the software is designed to make use of this then the processor will use far more of its capabilities at once. I'd love to quote some figures here, but you don't argue with nervous men in dark Soho bars. I'm convinced, though: it's a neat trick, it makes sense, and reading the papers that Intel researchers have published on the subject shows that they know what they're talking about.
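The idea is easy enough to sketch. The fragment below is my own illustration, not anything Intel showed us: it asks the operating system how many logical processors it can see -- two per physical chip, once hyper-threading is switched on -- and splits a job across that many threads. It's Python, so it demonstrates the programming model rather than the raw speedup, but software written along these lines is the kind that stands to benefit.

```python
# Sketch of thread-aware software (hypothetical example, standard library
# only): query the OS for its logical CPU count, then divide the work into
# that many threads. A hyper-threaded chip reports two logical CPUs.
import os
import threading

def partial_sum(data, lo, hi, out, idx):
    # Each worker sums its own slice and records the result.
    out[idx] = sum(data[lo:hi])

def threaded_sum(data, n_threads=None):
    # Default to one thread per logical processor the OS reports.
    n = n_threads or os.cpu_count() or 1
    chunk = (len(data) + n - 1) // n
    out = [0] * n
    threads = [
        threading.Thread(target=partial_sum,
                         args=(data, i * chunk, (i + 1) * chunk, out, i))
        for i in range(n)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(out)

print(threaded_sum(list(range(1000)), n_threads=2))  # same answer as sum()
```

To software structured like this, a second logical processor is free capacity; to a single-threaded program, it's invisible.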
The big question -- and we did ask -- is why the reticence over making hyper-threading available on the desktop, especially since the Pentium 4 is ready to rumble? Well, said our informant, it only works if the software knows about it. Most software outside the server world doesn't, so there's no point until we've got all the developers up to speed.
We didn't think that a very good answer, to be honest. The real reason is most likely that if you turn the feature on, software that doesn't use multiple threads will go slower. It's the downside of much of the Pentium 4's design: Intel has stretched the performance of its existing designs to the limit, and it's now forced to add features that have the potential to zoom ahead, but only with software that knows about them. Old software -- which is everything you and I run today -- will go slower. It's like the move to digital TV: once everyone's switched over there'll be loads more channels for everyone, but the need to invest in the infrastructure now means that less money's available to keep the analogue side of things up to scratch.
So Intel is in a bind. It has invested in a good idea, but if it promotes it in the wrong way it'll be a sitting duck for critics who'll turn it on, run benchmarking software and pronounce it a retrograde step. And you can't expect AMD to ignore that.
Fact is, things are going Intel's way. This week, Open magazine tested Intel's latest C/C++ compiler, the piece of software that turns programmers' code into executable software. This is exactly where Intel's claims will live or die -- if the company itself can't produce software that makes good use of the new features of the Pentium range, then nobody else has a chance.
The results bore out everything that Intel claims: the magazine's own benchmark software performance improved by between 30 percent and 50 percent. This more than wiped out the advantage AMD has enjoyed; its chips run around 20 percent faster than comparable Pentiums, but not against this code. Of course, it's always open to AMD to produce a compiler that does this sort of aggressive optimization for the Athlon, but for now it looks as if Intel is back on course.
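You can see the principle at work even without Intel's compiler. A small illustration of my own: the two functions below compute the same answer, but one runs through an optimized compiled loop while the other plods along one interpreted step at a time -- same logic, same chip, very different speed. That gap is the compiler's contribution in miniature.

```python
# Illustration only: identical computation, two implementations. Python's
# built-in sum() runs an optimized compiled loop; the hand-written loop is
# stepped through the interpreter -- a stand-in for well- versus
# poorly-compiled code, not a Pentium 4 benchmark.
import timeit

data = list(range(100_000))

def plodding_sum(xs):
    total = 0
    for x in xs:  # one interpreted step per element
        total += x
    return total

# Identical answers...
assert plodding_sum(data) == sum(data)

# ...but far from identical speeds.
slow = timeit.timeit(lambda: plodding_sum(data), number=20)
fast = timeit.timeit(lambda: sum(data), number=20)
print(f"stepped loop: {slow:.3f}s  compiled loop: {fast:.3f}s")
```

Swap the quality of the code underneath and the "same" program gets faster or slower without the hardware changing at all -- which is exactly what Intel's compiler did for Open magazine's benchmarks.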
What this means to you and me is less clear. It's always been the case that raw clock speed is no real guide to how quickly something will run, but now we have to ask ourselves what chip, what operating system, what software, and what compiler produced that software, before we can get a feel for how a particular combination is going to perform. The days of slotting in a disk and waiting for numbers to pop out are going fast; the chip companies know it and they're not quite sure how to sell the story. We're going to have to take a lot more salt with our benchmarks this year.