
Trace a falling ray and put it in your pocket

Written by Rupert Goodwins, Contributor

The next Intel Developer Forum is less than a month away, which means it's time to sort out visas (this one's in Shanghai), tick the boxes for one-to-ones with Intel execs, and start refreshing the little grey cells on what's hot, what's cool and what's frankly absurd in silicon.

One of Intel's favourite horses is ray tracing, the graphics technique where you synthesise a scene by modelling how light travels through it. It's very powerful and produces some ultra-realistic results: at the levels that matter here, optical physics is very well understood and amenable to good old-fashioned number crunching. No surprise, then, that a company which makes its money by selling number-crunchers is very keen on the idea.
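
For anyone who hasn't met the technique, here's a minimal sketch of the idea in Python - cast one ray per pixel, test it against the scene geometry, and shade from the physics at the hit point. The one-sphere scene, the resolution and all the names are my own illustrative choices, not anything Intel has demonstrated.

```python
# A toy ray tracer: one sphere, one point light, Lambertian shading.
# Scene, resolution and names are illustrative assumptions only.
import math

WIDTH, HEIGHT = 320, 240            # a modest, handheld-sized image
SPHERE_CENTRE = (0.0, 0.0, -3.0)
SPHERE_RADIUS = 1.0
LIGHT = (2.0, 2.0, 0.0)             # point light position

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def hit_sphere(origin, direction):
    """Distance along the ray to the sphere, or None if it misses."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_CENTRE))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c          # direction is normalised, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def trace(px, py):
    """Cast one primary ray through pixel (px, py); return a grey level 0-255."""
    # Map the pixel onto an image plane at z = -1 in front of the camera.
    x = (2.0 * (px + 0.5) / WIDTH - 1.0) * (WIDTH / HEIGHT)
    y = 1.0 - 2.0 * (py + 0.5) / HEIGHT
    direction = normalise((x, y, -1.0))
    t = hit_sphere((0.0, 0.0, 0.0), direction)
    if t is None:
        return 0                    # ray escapes: black background
    hit = tuple(d * t for d in direction)
    normal = normalise(tuple(h - c for h, c in zip(hit, SPHERE_CENTRE)))
    to_light = normalise(tuple(l - h for l, h in zip(LIGHT, hit)))
    diffuse = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return int(255 * diffuse)

if __name__ == "__main__":
    # Write a plain ASCII PGM image so there are no library dependencies.
    with open("sphere.pgm", "w") as out:
        out.write("P2\n%d %d\n255\n" % (WIDTH, HEIGHT))
        for py in range(HEIGHT):
            out.write(" ".join(str(trace(px, py)) for px in range(WIDTH)) + "\n")
```

Real renderers add reflections, refraction, shadows and many more rays per pixel on top of this, which is exactly where the appetite for teraflops comes from.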

The trouble with ray tracing is that, even with modern processors, it's still very expensive to produce real-time ray-traced images at high enough quality. Intel has shown some stonking demos at previous IDFs, but they've always been years away from the sort of thing you could afford to stuff in your desktop.

Ordinary processors have already been through the price-performance paradox - you need a big market to make the price low enough, but the price needs to be low enough to create a big market - yet that tipping point remains stubbornly out of reach for the sort of massively parallel, manycore teraflop chips required for desktop real-time ray tracing. Don't forget that Moore's Law has a hidden variable - cash - without which the whole thing fizzles out.

Which is why Intel is getting interested in ray tracing on mobile devices. The amount of oomph required to generate a scene is directly proportional to the number of pixels you have to push, and while we've all been taught by the market that top-end graphics on the desktop requires multiple millions of the wee beasties, the mobile experience is happier with far fewer. This fortuitous restriction makes it entirely possible to create devices that do real-time ray tracing without breaking the bank.

Intel's thinking is that this could be the way into the market: by focusing attention and design dollars on making efficient ray tracers for portable gaming devices, phones and UMPC-style boxes, it becomes much easier to create the right environment to scale everything up to full size. Equally fortuitously, the stubborn problems of making big stuff go faster have led to much more attention being paid to making the small stuff work more efficiently, meaning it's no longer daft to think about portable supercomputing.
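
To put rough numbers on that proportionality, here's a back-of-envelope sketch; the resolutions, rays-per-pixel and frame-rate figures below are my own assumptions rather than anything from Intel.

```python
# Back-of-envelope sums for "ray tracing cost scales with pixel count".
# All of the figures below are illustrative assumptions.
desktop_pixels = 1920 * 1200     # a high-end desktop panel
handheld_pixels = 480 * 272      # a typical portable-gaming screen
rays_per_pixel = 4               # primary ray plus a few secondary bounces
frames_per_second = 30           # a workable "real time"

for name, pixels in (("desktop", desktop_pixels), ("handheld", handheld_pixels)):
    rays = pixels * rays_per_pixel * frames_per_second
    print(f"{name:<8} {pixels:>9,} pixels -> {rays:>11,} rays/s")

print(f"the handheld needs roughly 1/{desktop_pixels / handheld_pixels:.0f} "
      "of the desktop workload")
```

On those assumptions the handheld shoulders roughly one-eighteenth of the desktop's workload - the gap Intel is hoping to exploit.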

(As an aside: I'm getting increasingly excited by this whole area and the concept of integrating supercomputer-class modelling systems into physical devices, as well as any complex system that matters. I want a Met Office level forecasting system for my diet, bank account, local council, travel... But that's another story, and one still to be grappled with. 2010 will be the start of the Decade of the Supermodel. Watch this space.)

But beware. There's still a strong whiff of "We can do this, so it must be done", and no guarantee that ray tracing will prove a better fit for mobile graphics than any of the other, more established techniques, even if it does provide a tempting roadmap for using oodles of otherwise idle operations per second.
