This year the first octa-core smartphone will hit the market, but as the number of cores inside mobiles and tablets grows, so does the toll on the battery.
The problem with cramming more cores onto processors is that energy efficiency doesn't scale with core count: as more cores are added, power consumption grows faster than performance.
If you were to put a 16-core processor into the average modern smartphone, maximum battery life would fall to about three hours, while a 100-core processor would cut it to just one hour, according to back-of-an-envelope calculations by a team of researchers from UK universities.
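The arithmetic behind such back-of-an-envelope figures is simple enough to sketch. The battery capacity and power figures below are illustrative assumptions chosen to produce numbers of the same order as those quoted, not the researchers' actual inputs:

```python
# Illustrative back-of-an-envelope estimate. All figures are assumptions:
# battery life falls roughly in inverse proportion to active core count.

BATTERY_WH = 12.0     # assumed smartphone battery capacity, watt-hours
BASE_POWER_W = 2.5    # assumed non-CPU draw (screen, radio etc.), watts
PER_CORE_W = 0.095    # assumed sustained draw per active core, watts

def battery_life_hours(cores: int) -> float:
    """Hours of runtime with all cores active, under the assumptions above."""
    return BATTERY_WH / (BASE_POWER_W + cores * PER_CORE_W)

for cores in (4, 16, 100):
    print(f"{cores:3d} cores: {battery_life_hours(cores):.1f} h")
```

With these assumed constants the model reproduces the article's headline figures: roughly three hours at 16 cores and one hour at 100.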
Tackling the rapacious appetite for power of many-core processors matters for more than just letting computers do more on the move. As a growing number of cloud services such as Gmail, Salesforce.com and Spotify are accessed over the internet, the need to keep down the energy demands of datacentres full of densely packed server clusters is also becoming pressing.
If left unaddressed, the rising power consumption of many-core processors may limit future gains in computing performance, with predictions that within three processor generations CPUs will need to be designed so that as little as 50 percent of their circuitry is active at any one time, to limit energy draw and prevent waste heat from destroying the chip.
Worrying about how to limit the draw of processors with hundreds of cores might sound academic, something that won't be an issue for mainstream processors for more than a decade. But many-core processors don't seem quite so distant when you consider there are already octa-core processors in desktops and servers, as well as specialist many-core parts such as Intel's Xeon Phi co-processor, and Moore's Law predicts processors with more than 16 times the transistors of today's chips by 2020.
The International Technology Roadmap for Semiconductors 2011, drawn up by experts from semiconductor companies worldwide, forecasts that by 2015 there will be an electronics product with nearly 450 processing cores, rising to nearly 1,500 cores by 2020.
Chip designers are coming up with novel ways of pushing down power consumption in multi-core devices. One example is Arm's big.LITTLE configuration, which pairs a high-performance, power-hungry processor with an energy-efficient, weedy one.
However, pairing energy-sipping with energy-hungry chips can only reduce power draw so far, according to Professor Bashir Al-Hashimi of the Electronics and Computer Science department at the University of Southampton, who is the director of a new project looking at a longer-term solution to the problem.
The University of Southampton is part of a consortium of universities and companies, including UK chip designer Arm and Microsoft, behind the PRiME (Power-efficient, Reliable, Many-core Embedded systems) project. The project will examine how processors, operating systems and applications could be redesigned to allow CPUs to more precisely match their power consumption to the application they are running.
"In the long term, the focus should be not just the hardware, the system software needs to become much more intelligent and work co-operatively with the hardware," Al-Hashimi said.
He said that ensuring processors were not sucking up more power than they needed at any one time would require a lot more intelligence in how operating systems manage the power consumed by CPUs. Some current power reduction techniques, for example clock and power gating, are deployed when the chip is being designed, and invoked when the chip is in use.
"This happens at design time, so requires good understanding of or predictions about the type of application one would run in order to look for opportunities to reduce the energy cost of computation, or eliminate it where there's no useful work being done," he said.
PRiME will investigate a dynamic model of power management, where processors would work in conjunction with the operating system kernel to shut down parts of cores or adjust the CPU's clock speed and voltage based on the precise needs of the application running on the processor at that moment.
This dynamic power management would require current computer hardware, operating systems and applications to be enhanced.
Additional circuitry, such as performance and energy counters, would need to be added to processors to capture more data on how much work a CPU was doing and how workloads were distributed across cores. This data could include the level of current being consumed and the operating frequency of each core.
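A rough sketch of what such per-core counter data and its aggregation might look like follows; the field names and figures are invented for illustration and do not describe a real counter interface:

```python
# Hypothetical per-core telemetry of the kind performance and energy
# counters might expose. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class CoreCounters:
    core_id: int
    cycles: int        # clock cycles elapsed since the last read
    busy_cycles: int   # cycles spent doing useful work
    current_ma: float  # sampled current draw, milliamps
    freq_mhz: int      # operating frequency when sampled

    @property
    def utilisation(self) -> float:
        """Fraction of cycles spent on useful work."""
        return self.busy_cycles / self.cycles if self.cycles else 0.0

def snapshot_load(counters: list[CoreCounters]) -> dict:
    """Aggregate per-core counters into a chip-wide load snapshot."""
    return {
        "mean_utilisation": sum(c.utilisation for c in counters) / len(counters),
        "total_current_ma": sum(c.current_ma for c in counters),
        "busiest_core": max(counters, key=lambda c: c.utilisation).core_id,
    }

# A busy big core next to a mostly idle little one:
cores = [
    CoreCounters(0, 1_000_000, 900_000, 120.0, 1800),
    CoreCounters(1, 1_000_000, 100_000, 40.0, 600),
]
print(snapshot_load(cores))
```

It is exactly this kind of snapshot that the operating system would interrogate, as described next.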
This data would then be interrogated by the operating system to capture a snapshot of the load on the processor and how the CPU was handling it. Interrogating this data in detail would require changes to power management routines within the kernels of operating systems.
Lastly, each application would likely need to come with a profile that described its power and performance needs to the power management system in the OS kernel. Al-Hashimi said one option would be for this to be generated during the application's development, using software tools that would estimate the app's performance needs on a given processor architecture.
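The project has not published a profile format, but one hypothetical shape such a generated profile might take, with entirely invented fields, is:

```python
# Invented application profile for illustration only; PRiME has not
# specified what information a real profile would carry.

MPEG_DECODER_PROFILE = {
    "app": "mpeg_decoder",
    "target_arch": "armv8",        # architecture the estimates were made for
    "min_freq_mhz": 500,           # slowest clock that still hits frame deadlines
    "preferred_cores": 2,          # parallelism the app can usefully exploit
    "deadline_ms": 40,             # per-frame budget at 25 frames per second
}

def meets_profile(freq_mhz: int, cores: int, profile: dict) -> bool:
    """Would a given frequency and core allocation satisfy the app's needs?"""
    return freq_mhz >= profile["min_freq_mhz"] and cores >= profile["preferred_cores"]

print(meets_profile(600, 2, MPEG_DECODER_PROFILE))
```

The power management routines in the kernel could then check candidate allocations against the profile before throttling anything.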
All these changes would allow an OS to constantly monitor the performance and power usage of a CPU, scaling its power usage to precisely match the needs of the app outlined in its profile. This adjustment would take place via methods such as reducing the clock speed and voltage flowing to the CPU and shutting down parts of cores. More sophisticated manipulation of processors — for example, switching between heterogeneous cores, a la Arm's big.LITTLE, and homogeneous cores — would require further modification of processor hardware.
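The scaling decision itself can be sketched as choosing the lowest frequency and voltage operating point that satisfies the app's declared need. Dynamic CMOS power grows roughly with voltage squared times frequency, which is why lowering both together pays off so handsomely. The operating points below are assumptions, not real silicon:

```python
# Hedged sketch of a dynamic voltage/frequency scaling (DVFS) decision.
# (frequency in MHz, core voltage in volts) pairs are assumed values.
OPERATING_POINTS = [(600, 0.8), (1200, 0.9), (1800, 1.0), (2400, 1.1)]

def choose_operating_point(required_mhz: float):
    """Lowest (freq, voltage) pair whose frequency meets the requirement."""
    for freq, volts in OPERATING_POINTS:
        if freq >= required_mhz:
            return freq, volts
    return OPERATING_POINTS[-1]   # demand exceeds the chip: saturate at the top

def relative_dynamic_power(freq_mhz: float, volts: float) -> float:
    """Dynamic CMOS power scales roughly with V^2 * f (capacitance omitted)."""
    return volts ** 2 * freq_mhz

# An app profile asking for ~1000 MHz lands on the 1200 MHz / 0.9 V point:
freq, volts = choose_operating_point(1000)
print(freq, volts, relative_dynamic_power(freq, volts))
```

Running flat out at the assumed top point would cost roughly three times the relative power of the lowest one, which is the headroom a profile-aware kernel could reclaim.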
Research will be divided among the UK universities involved: Imperial College London will work on hardware enhancement and reconfiguration, while the University of Southampton and Newcastle University will investigate optimisation of the software runtime management.
To help investigate power and reliability optimisations of many-core systems, the project will build a 1,024-core system with the help of the University of Manchester, which will contribute its knowledge of highly parallel systems, building on its work on the SpiNNaker architecture.
Researchers at the University of Southampton have also begun software work, modifying the Linux kernel power management system to capture data on the power and performance demands of an MPEG decoder.
"We're trying to get our intelligent power management to learn the task it's doing. We're trying to label tasks and how much they cost in terms of clock cycles. Based on that we will decide at what speed the processor needs to be operating at," Al-Hashimi said.
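One minimal way to sketch that idea, assuming a simple exponential moving average for the learned cycle cost (the task labels and numbers are invented for illustration):

```python
# Illustrative sketch of labelling tasks by their cost in clock cycles,
# then deriving the clock speed needed to meet a deadline. The learning
# rule (an exponential moving average) is an assumption, not PRiME's method.

ALPHA = 0.25   # weight given to the newest measurement

cycle_cost: dict[str, float] = {}   # task label -> estimated cycles per run

def record_run(task: str, measured_cycles: float) -> None:
    """Fold a fresh measurement into the task's running cost estimate."""
    prev = cycle_cost.get(task, measured_cycles)
    cycle_cost[task] = (1 - ALPHA) * prev + ALPHA * measured_cycles

def required_hz(task: str, deadline_s: float) -> float:
    """Clock speed needed for the task to finish within its deadline."""
    return cycle_cost[task] / deadline_s

record_run("mpeg_decode_frame", 20e6)
record_run("mpeg_decode_frame", 28e6)   # estimate drifts toward new samples
# Decoding at 25 frames per second leaves 40 ms per frame:
print(f"{required_hz('mpeg_decode_frame', 0.040) / 1e6:.0f} MHz")
```

The governor would then round that requirement up to the nearest operating point the hardware actually supports.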
PRiME is a five-year project being undertaken by research groups from the Universities of Southampton, Imperial College, Manchester and Newcastle. It is funded by a £5.6m grant from the Engineering and Physical Sciences Research Council (EPSRC).
As well as investigating ways of dynamically adjusting power consumption, the project will also investigate ways of altering how an application runs based on a profile describing how important it, or the data it is handling, is. While a flipped bit in a processor register may need to be corrected when running flight software onboard a plane, it is less likely to need fixing in a tablet playing a video.