
GPU vs CPU: wrong battle, wrong war

Written by Rupert Goodwins, Contributor

Intel is out on the road, preaching a new way of looking at processors. Well, it's not so new - it's a repackaging, in shinier polygons, of what has most recently been called the CISC vs RISC argument, one that's been going on since mainframe days. Briefly, the conflict is between making things general purpose, and thus mildly inefficient at everything, and specialising in one task and doing it extremely well while having enough firepower to tackle other things in clever but awkward ways.

These days, the two camps have pitched their tents on the opposing peaks of general computation and graphics processing. The real motivation, as always, is more fiscal engineering than silicon cleverness - ordinary CPUs have low margins and lots of competition, while GPUs sell for many multiples of the price of their boring siblings and have a far less crowded market. The logic Intel seems to be following is that if more applications used GPUs, the company would sell more and make more money. And applications should do that, says Intel, because GPUs are so much faster than CPUs.

And if you're a chip company with an extensive history of engineering leadership, such logic seems compelling.

There are just three problems. First, it is exceptionally difficult to make GPUs do anything well except what they were designed to do - enormously parallel operations on very large data sets - and while this means you can build scientific and engineering supercomputers from a few thousand pounds' worth of parts you can pick up at Maplin's, this has only cheered up jobbing engineers and scientists. I'm extremely happy about that - the engineer in me is constantly in awe at the sheer muscle involved - but there just aren't that many jobbing engineers and scientists out there. The market isn't mainstream, and even if it were to become so, there's no reason that mainstream economics wouldn't apply.
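
For the curious, here is a minimal sketch of the sort of job a GPU is built for - the same trivial sum applied independently to a million numbers - written against NVIDIA's CUDA runtime. The example and its names are purely illustrative, not drawn from anything Intel or Nvidia ship.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* One GPU thread per element: each thread performs one trivial addition. */
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                     /* a million elements */
    const size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover every element;
       the hardware runs thousands of these threads at once. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost); /* waits for the kernel */

    printf("c[0] = %f\n", hc[0]);              /* expect 3.000000 */

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The structure only pays off when there really are millions of independent elements to process in lockstep - which is precisely why it doesn't translate to branchy, pointer-chasing desktop software.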

Second: CPUs are scooping up GPU hardware cleverness faster than GPUs are acquiring the CPU's facility with mainstream software - not difficult, as the latter isn't happening at all. Looked at in the right light, Nehalem has many of the features of a classic GPU - tightly coupled memory, multiple threads, lots of vector processing ideas - but just happens to be a low-margin product which runs ordinary software.

Third: as quickly as programmers and system designers learn how to bring parallelism into mainstream software, those techniques themselves go mainstream. For the GPU revolution to happen there has to be not only a sea change in programming methodology, but one which doesn't benefit CPUs like Nehalem to any great extent.

(I'm assuming here that there isn't some new class of software waiting off-stage that does something extraordinarily compelling and that will only run on GPUs. Halo 3 doesn't count.)

Intel, as always, is hedging its bets. Larrabee, the sea-of-processors manycore design, is slated to appear as a GPU first, but is, in the right light, very CPUish (most of those cores will be x86). And while it is admirable and understandable that the company's head of GPU stuff is touring the US banging the drum for his own brand of magic, it remains true that for 99 percent of non-gamers, the only thing even the ruftiest-tuftiest GPU will do for us today is sit in our computers and soak up the watts. With nobody able to say when that'll change, it's going to be a long time before the battle even looks worth the fight.
