
Are multicore processors here to stay?

Intel director James Reinders looks at the power, memory, and instruction-level parallelism walls that are forcing the move to multicore processors, and explains why sometimes it makes sense to underclock, not overclock...
Written by ZDNet Staff, Contributor
My name is James Reinders and today we're going to talk about multicore processors, which are very much in the news.

I'm going to address a rather simple question that I get asked all the time, which is: are multicore processors a fad or are they here to stay? Are multicore processors something we need to pay attention to, and are they going to continue into the future? Well, the answer is yes, multicore processors are here to stay, so I'm going to talk a little bit about why that is, and a little bit about a twist that multicore processors are likely to undergo as time goes on and that we programmers will need to pay attention to.

So first of all: why multicore processors? Well, one thing you'll hear a lot about is power. Power is one of the walls that we see, one of the reasons that we've stopped increasing clock rates as fast as we did in the past on microprocessors. There are two other concerns: the instruction-level parallelism wall and the memory wall. So let me talk about each one of these quickly, because each one of them is a motivation to move to multicore.

Power I think has been in the news more than the rest of them. Let me talk about the other two first.

One is instruction-level parallelism. Instruction-level parallelism is when a microprocessor finds a way to execute multiple instructions simultaneously to speed things up. As we increase the clock rate on the processor, the processor needs to find more and more instructions to execute at once.

The problem is that finding more and more instructions that can run in parallel in existing programs is a very difficult task, and as clock rates have gone up it has become more and more difficult to show a return on instruction-level parallelism. So as processor clock rates have gone up, we haven't got as large an increase in performance as we did in the past, and that trend was only going to get worse as we increased the clock rate. So instruction-level parallelism is just one of the problems that we have.

Another one is the memory wall. Processor clock rates have been increasing much faster than memory clock rates for some time now, and that trend is continuing. So again, as we increased the gigahertz on microprocessors we weren't seeing the performance return we had in the past, which caused us to add larger and larger caches and other things that alleviate some of the memory issues, but not all of them. Multicore actually addresses this issue by slowing the increase in clock rate on the processor so that the problem doesn't get worse.

And then power. I think this is the one I've read the most about - that power consumption on microprocessors just had to stop increasing.

It's a rather interesting observation here. Take a microprocessor and start with a baseline: say we have a processor that consumes one unit of power and delivers one unit of performance. A lot of times people look to overclock processors, perhaps to get a 13 percent improvement in performance, and you can ask how much power it takes to do that. Well, it's quite typical to increase power by 73 percent simply to get a 13 percent improvement in performance. This is taking the same microprocessor and doing nothing with it other than increasing its clock rate.

Historically we have done other things with microprocessors over time, but at any given point in time, if you just increase the clock rate by overclocking a processor, the power increase is rather substantial. It doesn't seem to pay off. Now, some people will do this just to get a 13 percent performance increase and pay a 73 percent penalty in power, but that's not a great idea for most of us.

What's interesting about this to me is that underclocking is a very real possibility. Likewise, if you were willing to accept a processor that only gave you 87 percent of the performance, how much power would it consume? The answer is roughly a half.

Well, this opens up a pretty easy suggestion: why don't you put two of these underclocked processors together? Together they consume roughly the same power as the original, and yet you can get yourself into a situation where the performance is on the order of 73 percent higher.

It's kind of a neat trick: take a processor, lower its clock rate, and get more performance per watt out of it than you might expect; then put two of them together, getting roughly the same amount of power consumption but almost twice the performance, because you've got two processors. This is a big motivation behind dual-core processors, and then quad-core processors and so on.
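To make that arithmetic concrete, here is a minimal sketch in Python (not part of the talk itself) that reproduces those figures under two simplifying assumptions: that dynamic power scales roughly with the cube of clock frequency (power goes as voltage squared times frequency, with voltage tracking frequency), and that performance scales sub-linearly with clock because of the memory and ILP walls. The relative_power and relative_performance helpers and the exponents are illustrative, chosen only to line up with the 13 percent and 73 percent numbers above.

# Back-of-the-envelope model of the overclock/underclock trade-off.
# Assumption: dynamic power ~ voltage^2 * frequency, and voltage scales
# roughly with frequency, so power ~ frequency^3 (a common rule of thumb).
# Assumption: performance grows slower than clock (memory and ILP walls);
# the 0.66 exponent is illustrative, picked to give ~13% for a 20% overclock.

def relative_power(clock_factor):
    # power relative to the baseline processor (clock_factor = 1.0)
    return clock_factor ** 3

def relative_performance(clock_factor):
    # performance relative to the baseline processor
    return clock_factor ** 0.66

# Overclock by ~20%: roughly +73% power for only ~+13% performance.
print(relative_power(1.2), relative_performance(1.2))    # ~1.73, ~1.13

# Underclock by ~20%: power roughly halves, performance drops only ~13%.
print(relative_power(0.8), relative_performance(0.8))    # ~0.51, ~0.87

# Two underclocked cores: about the original power budget, but (for work
# that can use both cores) performance on the order of 73% higher.
print(2 * relative_power(0.8), 2 * relative_performance(0.8))  # ~1.02, ~1.73

The exact exponents vary by design and workload, but the shape of the trade-off is the point: cutting clock rate buys back far more power than performance, which is exactly the budget a second core spends.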

Now, this is only one of the tricks. Another trick is that as processor sizes shrink we can fit twice as many processors in the same area every couple of years, so that's another trick besides just lowering the clock rate.

So we're not on a trend to keep lowering clock rates forever, but we are on a trend to keep using that space for more and more processors, because we're able to keep the power consumption at the same level.

Now, one twist that I wanted to leave you with, since we understand now that these three walls motivate multicore, is that all the multicore chips we have seen so far take processors of roughly today's capabilities and just duplicate them: dual-core, and we've seen quad-core chips, where the cores are roughly of the same capability as today's.

Well, one of the things we've seen people ask is: what if you took a core of much smaller capability, maybe a core from ten years ago that takes up a lot less area, and put a lot of those very small cores on a die? Would that be more beneficial? Well, it turns out that that has the complication of not performing as well for today's programs, and so people tend to favour today's out-of-order cores. However, what if you combined these: some small ones and some large ones put together? This is probably the future of multicore, though not immediately; we definitely have quad cores on the market, and I expect we'll have eight cores, maybe 16 cores, but somewhere along this continuum the idea of putting hundreds of cores together is very appealing.

I don't think we'll see many processors come out with only lots of small cores, but the combination is intriguing; this is referred to as heterogeneous computing. It's a topic that we'll talk more about in a later series, when we talk about programming these chips, because I think it's a very important trend to be aware of: this is probably where multicore will go, and we need to be programming for it.
