
Power saving - another case for open source

Written by Rupert Goodwins, Contributor

One of the more intriguing demonstrations at IDF this spring was of an approach Intel is investigating to save yet more power at the system level. One of the most important basic techniques, says the company, is to put as much of the system to sleep as often as possible: when you think how little is actually happening during most business IT - the screen static, the computer waiting for keyboard or mouse input, the network resting between browsed pages - the opportunities for tiny comas are many indeed.

Unfortunately, the basic PC architecture isn't set up to help. There's a whole background blizzard of interrupts coming in from peripheral circuits that exist precisely to attract the main processor's attention and wake it from whatever it was doing. There's a buffer of data just arrived from a disk. There's a clock event. That packet you sent - it's gone now.

What Intel says it's found - and knowing a bit about x86 interrupt processing, I can wholeheartedly believe it - is that this constant storm of events severely limits the benefits of CPU sleep modes. Moreover, although every interrupt carries a certain sense of "Look at me NOW", very many of the events they announce can safely be ignored for a few milliseconds. And loads of them need only a tiny bit of processing: spending all the power and time getting a processor out of sleep to run a few hundred instructions, then sending it back to sleep only for it to be woken immediately afterwards, is really not very efficient. Anyone with small children will know the problem.

So, says Intel, the trick is for the processor to tell its peripheral circuits that it will accept interrupts only within certain time windows. This forces the peripherals to effectively synchronise their interrupts: the processor wakes up as before, but finds plenty of work to do - and can then go back to sleep for a useful length of time.
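
To get a feel for why this pays off, here's a toy model - mine, not Intel's, with invented numbers: ten interrupts arrive at their natural times, and we count how many separate wakeups the processor suffers servicing each one immediately versus accepting them only at 4ms window boundaries.

    /* Toy model of interrupt coalescing. Compare how many times a CPU
     * must wake from sleep when interrupts are serviced immediately
     * versus deferred to periodic acceptance windows.
     * All figures are illustrative, not Intel's. */
    #include <stdio.h>

    #define N_EVENTS 10

    int main(void) {
        /* Arrival times (ms) of interrupts from assorted peripherals. */
        double arrivals[N_EVENTS] = {0.4, 1.1, 1.3, 2.9, 3.0, 3.2,
                                     5.5, 5.6, 7.8, 9.9};
        double window_ms = 4.0;   /* CPU accepts interrupts every 4 ms */

        /* Immediate servicing: every interrupt is its own wakeup. */
        int immediate = N_EVENTS;

        /* Coalesced: events wait for the next window boundary, so one
         * wakeup drains everything that arrived during that window. */
        int coalesced = 0, last = -1;
        for (int i = 0; i < N_EVENTS; i++) {
            int w = (int)(arrivals[i] / window_ms);
            if (w != last) { coalesced++; last = w; }
        }

        printf("immediate: %d wakeups, coalesced: %d wakeups\n",
               immediate, coalesced);   /* 10 versus 3 here */
        return 0;
    }

Three wakeups instead of ten, and each wakeup now carries enough work to justify the cost of climbing out of a deep sleep state.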

The demo we saw claimed power savings of around 30 percent by this technique alone. Moreover, there were some unexpected side-effects related to the business of peripheral servicing: having a faster, higher-power processor could result in less power being used overall, because it screams through the tasks and lets many of the subsystems themselves go back to sleep sooner than before. You can see why Intel likes this as an idea.
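
The arithmetic behind that counterintuitive result is easy to sketch. The wattages and times below are pure invention, but the shape is what matters: while the processor grinds through pending work, the peripherals waiting on it are held awake too, so a hungrier core that finishes sooner can cost fewer joules overall.

    /* Back-of-envelope "race to idle" sums, with hypothetical figures. */
    #include <stdio.h>

    int main(void) {
        double slow_w = 5.0,  slow_ms = 10.0;  /* frugal but slow core */
        double fast_w = 12.0, fast_ms = 3.0;   /* hungry but fast core */
        double periph_w = 2.0;  /* subsystems held awake during servicing */

        /* Energy = power x time; watts x milliseconds = millijoules. */
        double e_slow = (slow_w + periph_w) * slow_ms;
        double e_fast = (fast_w + periph_w) * fast_ms;

        printf("slow core: %.0f mJ, fast core: %.0f mJ\n", e_slow, e_fast);
        /* 70 mJ versus 42 mJ: the faster core wins despite the wattage. */
        return 0;
    }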

However, this interrupt herding isn't something you can just do. Some interrupts really do have to be seen to straight away. The protocol by which the processor tells the peripherals what windows are acceptable - and by which the peripherals say that no, they need something else - isn't going to be trivial. There may be conditions where performance and reliability are badly impacted: dealing with interrupts in complex systems is a field famed for containing tigers in the long grass.
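
One way to picture that negotiation - purely my sketch, not Intel's protocol - is each peripheral declaring the longest it can afford to sit on an interrupt, with the host settling on a common window and exempting anything that genuinely cannot wait.

    /* Hypothetical window negotiation: peripherals declare their
     * tolerance for deferral; the host picks the tightest window that
     * still pays off, and exempts devices with hard deadlines. */
    #include <stdio.h>

    struct periph {
        const char *name;
        double max_defer_ms;  /* longest tolerable servicing delay */
    };

    int main(void) {
        struct periph reqs[] = {
            { "disk",  8.0 },
            { "nic",   4.0 },
            { "audio", 0.5 },  /* tight deadline: coalescing would break it */
        };
        int n = sizeof reqs / sizeof reqs[0];
        double floor_ms = 2.0;   /* below this, windowing isn't worth it */
        double window = 16.0;    /* longest window the host would like */

        for (int i = 0; i < n; i++) {
            if (reqs[i].max_defer_ms < floor_ms)
                printf("%s needs %.1f ms: exempt, service immediately\n",
                       reqs[i].name, reqs[i].max_defer_ms);
            else if (reqs[i].max_defer_ms < window)
                window = reqs[i].max_defer_ms;
        }
        printf("shared window for the rest: %.1f ms\n", window);
        return 0;
    }

Even this cartoon shows where the tigers hide: every exemption erodes the saving, and a device that misjudges its own deadline takes the whole scheme down with it.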

In particular, there's going to be a whole host of implications for the low-level areas of the operating system, especially the hardware driver interface - the part of the stack where interrupts and data flows are knitted together. Ideally, Intel will start to introduce systems with this power-saving scheme as an option, allowing system designers to begin experimenting and conduct their own tests. There'll have to be a lot of co-operation across different companies - device drivers will have to appreciate the needs of other devices to make best use of the new environment - as well as close work with kernel architects, with rapid iteration of ideas and results.
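
At the driver interface, the contract might look something like the sketch below - hypothetical, and not any real kernel's API: each driver registers its handler along with a declared tolerance for deferral, and the kernel-side layer batches everything that can wait into the next shared wakeup.

    /* Hypothetical driver registration carrying a deferral policy. */
    #include <stdio.h>

    typedef void (*irq_handler)(const char *dev);

    struct drv {
        const char *name;
        irq_handler handle;
        unsigned max_defer_us;  /* 0 = must be serviced immediately */
    };

    static void service(const char *dev) { printf("  servicing %s\n", dev); }

    int main(void) {
        struct drv drivers[] = {
            { "net",   service, 4000 },
            { "disk",  service, 8000 },
            { "audio", service, 0    },  /* latency-critical, exempt */
        };
        int n = sizeof drivers / sizeof drivers[0];

        /* At a window boundary, drain everything that agreed to wait... */
        printf("window tick:\n");
        for (int i = 0; i < n; i++)
            if (drivers[i].max_defer_us > 0)
                drivers[i].handle(drivers[i].name);

        /* ...while exempt devices still interrupt whenever they like. */
        printf("immediate path:\n");
        for (int i = 0; i < n; i++)
            if (drivers[i].max_defer_us == 0)
                drivers[i].handle(drivers[i].name);
        return 0;
    }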

I can see how to do this with Linux and the other open-source OSs. Indeed, Linux in particular has a great tradition of encouraging innovation in the lower levels of the system, and people exercise it constantly. I know the mailing lists and forums where that work lives; I can see many examples of how new ideas have got out there; and where there are problems - nobody can pretend Linux device drivers are free of pain - I can see how people deal with them already.

I can't see how to do this with Windows. I wouldn't know who to talk to, or where to start, if I were Intel wanting to get this new idea out there, if I were a peripheral manufacturer wanting to be part of it, or even if I were Microsoft wanting to help make it all happen.

Actually, there is one way I can see the Windows world adopting these new ideas within a framework that has widespread buy-in, cross-industry experience and practical proof of concept: wait for open source to make it work, then adopt the results. But that's it.

Or have I missed something?
