
Grid computing saves lives, but could do so much more

With great power comes great responsibility. That's an aspect of owning advanced CPUs that Intel and AMD rarely stress, but one that the people behind grid computing can't avoid
Written by Leader, Contributor

Each of us has computing power under our desks or on our laps that a 1970s university would have envied, and it's shameful to leave it idle so much of the time. That's the thinking behind grid computing, more specifically the sort of distributed project exemplified by FightAIDS@Home, which got going on Monday. This takes the mathematically intensive task of modelling interactions between drugs and HIV, and divides it between as many PCs as have been volunteered.

This is an effective way of using spare resources on what is arguably a more pressing task than the one chosen by the pathfinder grid computing project, SETI@Home, which hunted for alien signals in radio telescope noise. That project has since evolved into a more sophisticated piece of software, BOINC, which can split its attention across multiple projects, including drug searches.

Laudable stuff, but still far too crude. These grid projects share the same downsides. First, they introduce new security worries, more than enough to curtail their use in many CPU-rich corporate environments. Second, they are not easy to mix and match: users cannot set priorities between tasks running under different grid systems. Third, they encourage PCs to be left on and running full tilt, contributing to another great humanitarian crisis in the making: the tension between energy use and environmental change.

One solution is to move the basic grid functionality out of the application layer and into the operating system, or even into hypervisor space. There it could benefit from the best security and process management the OS has to offer, while working hand in hand with the fine-grained power control systems now evolving. That's satisfying from an architectural viewpoint, practically feasible, and would bring many additional benefits.
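To make the idea concrete, here is a minimal sketch in Python of how such an OS-level grid interface might behave. Nothing like this exists in any current operating system; the names GridScheduler, GridTask, submit and tick are invented purely for illustration. The point is the single, unified queue: donated work from any project runs under one priority scheme and one power budget.

```python
# Hypothetical OS-level grid interface: a single scheduler for donated
# cycles, shared by every grid project. All names here are invented.

from dataclasses import dataclass, field
from typing import Callable
import heapq

@dataclass(order=True)
class GridTask:
    priority: int                                  # one priority scale for all projects
    name: str = field(compare=False)
    work: Callable[[], None] = field(compare=False)

class GridScheduler:
    """Imagined kernel-side scheduler: runs donated work only when the
    machine is idle and within a user-set power budget."""

    def __init__(self, max_watts: float):
        self.max_watts = max_watts
        self.queue: list[GridTask] = []

    def submit(self, task: GridTask) -> None:
        heapq.heappush(self.queue, task)

    def tick(self, idle: bool, current_watts: float) -> None:
        # Donate cycles only when the user is idle and under the power cap.
        if idle and current_watts < self.max_watts and self.queue:
            task = heapq.heappop(self.queue)
            task.work()

# Two projects share one queue with comparable priorities, something
# today's separate grid clients cannot offer.
sched = GridScheduler(max_watts=45.0)
sched.submit(GridTask(priority=1, name="fightaids", work=lambda: print("docking run")))
sched.submit(GridTask(priority=2, name="seti", work=lambda: print("signal search")))
sched.tick(idle=True, current_watts=30.0)   # runs the higher-priority task first
```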

This would not only let tasks migrate to whichever system was most efficient at any particular time; the computers themselves could also move work elsewhere when they knew that local power was temporarily more expensive, helping to balance load across power grids, potentially worldwide. That would be a major step towards more efficient power use overall.
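As a toy illustration of that price-aware migration idea, the following sketch picks the cheapest place to run a job, assuming nodes could advertise their current electricity tariff. The node names and tariff figures are invented for illustration.

```python
# Toy price-aware placement: send a job where its energy is cheapest.
# Node names and tariffs are illustrative only.

def cheapest_node(nodes: dict[str, float], work_joules: float) -> tuple[str, float]:
    """Pick the node where a job needing `work_joules` of energy costs
    least to run, given each node's tariff in currency per kWh."""
    def cost(tariff_per_kwh: float) -> float:
        kwh = work_joules / 3.6e6          # 1 kWh = 3.6 million joules
        return kwh * tariff_per_kwh
    name = min(nodes, key=lambda n: cost(nodes[n]))
    return name, cost(nodes[name])

# A job needing 2 MJ of energy: off-peak Oslo beats daytime London or Sydney.
tariffs = {"london-day": 0.28, "oslo-night": 0.09, "sydney-day": 0.31}
node, price = cheapest_node(tariffs, work_joules=2e6)
print(f"send the job to {node}; estimated energy cost {price:.3f}")
```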

It would also open the way for the distributed projects to measure their watts per instruction, a figure none of them currently advertises, and so improve their own efficiency. Users could give preferential time to more efficient processes, encouraging competition for ever lower consumption.
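The figure itself is straightforward to define: energy per instruction is just measured power multiplied by time, divided by the instructions executed. A back-of-the-envelope sketch, with purely illustrative numbers:

```python
# Energy per instruction from measured power draw and instruction count.
# All figures below are illustrative, not real project measurements.

def joules_per_instruction(avg_watts: float, instructions: float,
                           seconds: float) -> float:
    """Energy per instruction = power x time / instructions executed."""
    return (avg_watts * seconds) / instructions

# e.g. a work unit executing 5e12 instructions over an hour at 60 W:
epi = joules_per_instruction(avg_watts=60.0, instructions=5e12, seconds=3600.0)
print(f"{epi:.2e} J per instruction")   # about 4.32e-08 J
```

Projects publishing a number like this would let users rank them by efficiency, not just by cause.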

Operating system designers should take these ideas on board, and soon. Rather than pile on ever more feature bloat to no good end, they, and we, must recognise that computer users have social obligations too, and that meeting them should be a recipe for a better environment for everyone.
