Successive generations of single-core processors no longer show much performance improvement — Moore's Law has run out for the single core, at least in terms of what can be done within the heat and power restrictions of ordinary chips.
Moore's Law is still alive and well in other ways; by slowing each core down a little but adding many more cores, processors can still show important performance gains. To use these cores effectively, though, software must be designed to run, as far as possible, as a set of independent threads, streams of computation that can execute independently of — and at the same time as — each other. That's multi-threading, and, without it, software won't go faster with the new chips.
Hang on, I thought there was a certain natural level of multi-threading in modern computing environments anyway?
Yes, OK — on most systems today you do get a certain level of multi-threading simply by running multiple applications, such as an operating system, network software and security software, alongside the main applications. But the deeper consideration is at the level of the individual application. A multicore processor delivers performance by having several cores perform multiple tasks (called threads) at one time. To keep the cores busy and achieve the highest level of performance, software needs to generate several threads that can be processed simultaneously. So, for a single application to get the most out of a multicore processor, it needs to express parallelism — and that requires developers to understand parallelism and how to identify and exploit it.
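The idea of one application generating several threads to keep the cores busy can be sketched in Java (a minimal illustration, not from the interview — the class and method names are invented for the example):

```java
// Splitting one piece of work — summing a range of integers — across
// two Java threads, so each half can run on its own core.
public class TwoThreadSum {
    // Sum the integers in [from, to).
    static long partialSum(int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += i;
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] results = new long[2];
        Thread t1 = new Thread(() -> results[0] = partialSum(0, 500_000));
        Thread t2 = new Thread(() -> results[1] = partialSum(500_000, 1_000_000));
        t1.start(); t2.start();   // both halves may now run simultaneously
        t1.join();  t2.join();    // wait for both to finish
        System.out.println(results[0] + results[1]);  // prints 499999500000
    }
}
```

On a single core the two threads simply take turns; on a multicore chip they can genuinely run at the same time, which is the performance the interview is describing.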
So multi-threaded applications are built with complex co-ordination between threads, meaning that, when bugs develop, they cause greater impact and are harder to find and harder to fix?
It's going to be a more challenging world to program in, for sure. But programmers can use development tools designed to help them create multi-threaded code. For example, compilers developed by Sun and the Portland Group have built-in features that help to create multiple threads. There are also performance libraries made up of highly threaded routines that programmers can call when developing code. Developers can also use a managed code environment, like the Java Virtual Machine, which allows an application to have multiple threads of execution running concurrently. But, without a doubt, experience and good design will help the most.
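As a sketch of the managed-environment route, the JVM's standard `java.util.concurrent` library lets you hand independent tasks to a thread pool sized to the machine's core count, rather than managing raw threads yourself (the class and helper below are invented for the example):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    // Run independent tasks on a pool with one thread per core
    // and sum their results.
    static long sumAll(List<Callable<Long>> tasks) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            long total = 0;
            for (Future<Long> f : pool.invokeAll(tasks)) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Two independent pieces of work the pool may run concurrently.
        long total = sumAll(List.of(() -> 1L + 2L, () -> 3L + 4L));
        System.out.println(total);  // prints 10
    }
}
```

The pool, not the application, decides how the tasks are scheduled onto threads — which is exactly the kind of thread-management abstraction the tools above aim to provide.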
It still sounds like more trouble than it's worth. I think I'll just develop my applications my way and ramp them up later with some parallelism.
Bad idea. Parallelism can seldom be added effectively as an afterthought. The key is the way we think. We have decades of serial programming that have reinforced a certain way of building applications, one that makes expressing parallelism more difficult than necessary. We all need to work to think parallel. The good news is that many of the issues serial thinking created are ones parallelism can address.
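The "think parallel" point can be made concrete with a small Java sketch (my illustration, not the interviewee's): a running total written the serial way carries a dependency from one iteration to the next, while the same computation expressed as an associative reduction has no such chain, so the runtime is free to split it across cores.

```java
import java.util.stream.LongStream;

public class ThinkParallel {
    // Serial habit: every iteration depends on the previous running total,
    // so the loop cannot be split up as written.
    static long serialSum(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) total += i;
        return total;
    }

    // Parallel formulation: an associative reduction with no loop-carried
    // dependency, which the runtime may divide across threads.
    static long parallelSum(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(serialSum(1_000));    // prints 500500
        System.out.println(parallelSum(1_000));  // prints 500500
    }
}
```

Same answer either way — the difference is that the second formulation expresses the parallelism instead of hiding it, which is much easier to do at design time than to retrofit.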
What are the training and certification options open to software developers keen to align themselves to the world of parallel processing?
Online resources, like the Intel Software Network, host a community of developers engaging in dialogue with experts and peers facing similar challenges and sharing best practices. Check out Intel Software College and also perhaps visit its classroom training schedule and registration site. Unsurprisingly, there are many informal resources, such as blogs, forums and programmer communities, devoted to parallel programming — a good start is Thinking Parallel.
So, if I don't learn to think parallel and learn to use new tools related to abstraction of thread management, then am I out of a job?
That's probably a bit strong, but this is the way the industry is going. Independent software vendors are working with the big chip manufacturers and, by and large, are embracing parallelism. Developers early in their careers should view the parallel-programming transition as an opportunity to jump ahead.
Is it just the usual suspects? What about the rest of the industry?
There are of course other companies working to push forward chip technology, such as Samsung, Silicon Graphics, Texas Instruments, Infineon Technologies and NEC, but much of their work is in either the mobile space or specifically Flash memory chips, rather than core processing "power" chips. Sun and IBM are also working very hard at multicore chips and the associated programming issues but, because they have no presence in desktop or portable products, they tend to be overlooked. One exception is the Cell processor IBM developed in conjunction with Sony and Toshiba, a nine-core chip currently found in the PlayStation 3 and also destined for the next-generation supercomputer.
Why are we constantly building a technology landscape that requires more and more power? Where will all this end?
Most people say that it won't end. We are always looking for ways to solve new problems and make life easier and better. So, as long as computers can help do that, they will get more complex. End-user demands for immersive experiences, more timely information and higher productivity are a few of the pulls. Intuitive application user interfaces, life-like graphics, bigger data sets and more sophisticated analytics and searches all demand more processing power. This will fuel demand for processors into the future. The form and function of computers will continue to evolve, and the hunger for more processing power is not going away.