
GPU to the future

It’s time to state the plain truth, something that’s being hidden from us by special effects and shiny chrome: Computing as we know it is dead. We’re working with zombie operating systems that lurch in search of fresh cycles on top of the rotting corpse of Moore’s Law.
Written by Simon Bisson and Mary Branscombe, Contributors

We’re so used to PCs getting faster every year because processors get faster that we’re shocked to find the latest software feels slow and bloated. That’s because PCs haven’t got any faster over the last couple of years. In fact, clock speed for clock speed they’re slower – and as very few programs take advantage of multi-core CPUs, software actually runs slower now than it did then.

Intel may have pulled the hafnium over our eyes, turning down clock speeds and adding more and more cores, but we’ve hit the Moore’s Law wall. Things have stopped getting faster, and are now just getting more efficient. There’s not much scope for improvement, either. At smaller die sizes, quantum effects start making silicon unreliable – and that’s not going to help with running nuclear power stations on off-the-shelf servers.

(And if any Intel folk are reading this, we’re not part of that anti-multi-core fringe; we just realise that programming across several cores is very hard indeed – and traditional procedural code doesn’t parallelise well, even with tools like your Parallel Studio. Until we get a generation of developers familiar with multi-threaded parallel application development, we’re going to be stuck with applications that don’t take advantage of the silicon they run on.)
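To see why procedural code resists parallelising, here’s a minimal Python sketch (illustrative only – the names and the trivial `work` function are ours, not anything from Parallel Studio). The serial loop is easy to write; spreading it across cores means restructuring it as an order-independent map, and that restructuring is only safe because each step is independent:

```python
# Illustrative sketch: a serial loop versus the same computation
# restructured as an order-independent map across worker processes.
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # Stand-in for a CPU-bound step with no loop-carried dependency.
    return n * n

def serial(values):
    results = []
    for v in values:            # traditional procedural code: one core
        results.append(work(v))
    return results

def parallel(values, workers=4):
    # Only valid because each work(n) is independent; code with shared
    # mutable state or loop-carried dependencies doesn't split this way.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, values))

if __name__ == "__main__":
    print(serial(range(8)))
    print(parallel(range(8)))
```

The hard part in real applications isn’t the `pool.map` call – it’s proving that the loop body really has no hidden dependencies, which most procedural code can’t offer.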

There is some hope for speed, though, and Microsoft’s PDC announcement that the IE9 rendering engine will be GPU-based, using the Direct2D APIs, is a sign that the mainstream has finally started to deliver on the promise of GP-GPU computing. Rendering text is a start – managing TCP/IP pipelines and working with the DOM surely can’t be far behind, especially with DirectCompute and OpenCL getting a lot of attention. That’s because GPUs have yet to get anywhere near Moore’s Law’s wall. Talking to silicon architects at large software companies, they all seem to agree that there’s little room for improvement left in CPUs – and that eight cores is close to a practical limit for multi-threaded development platforms at the moment. However, they also agree that there’s at least another five years’ worth of speed-up in GPU silicon before it faces the problems the CPU has run into.

Yes, parallelising for the GP-GPU isn’t easy either, but those massive arrays of compute units that spend most of their time rendering Office screens are ideal for several classes of data-intensive parallelism. That’s where technologies like DirectCompute, OpenCL and CUDA come in. They make it easier to offload processor- and memory-intensive code from the CPU onto the GPU. That’s not just a saving in time; it’s also a saving in power and heat – and it frees capacity the rest of the application can use to improve other aspects of performance.
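The programming model those three APIs share can be sketched in plain Python (a hypothetical toy, no GPU required – the names `saxpy_kernel` and `launch` are ours): each output element is computed by an independent “kernel” invocation identified by its index, which is exactly what lets the hardware run thousands of them at once.

```python
# Pure-Python sketch of the kernel model shared by CUDA, OpenCL and
# DirectCompute: one kernel invocation per output element, with the
# element's index standing in for the hardware thread ID.

def saxpy_kernel(i, a, x, y, out):
    # Each invocation touches only index i, so invocations are
    # independent and can run in any order – or all at once on a GPU.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # A real runtime launches n hardware threads; here we just loop.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(saxpy_kernel, 4, 2.0, x, y, out)
print(out)  # → [12.0, 24.0, 36.0, 48.0]
```

The design point is the same one the real APIs make: you express *what happens to one element*, and the runtime decides how many run concurrently – which is why data-intensive workloads move to the GPU far more naturally than general procedural code does.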

It’s time to learn a whole new set of APIs. Our users will be a lot happier.

--Simon
