Intel plans for parallel-programming universe

Intel's James Reinders outlines the chipmaker's strategy for convincing more developers to switch to parallel programming
Written by Adrian Bridgwater, Contributor

James Reinders, Intel's head of software development products, sets out some of the company's plans for encouraging more developers to switch to parallel programming.

With many programmers barely touching multithreaded, parallelised application development, ZDNet UK attended April's Intel parallel computing conference in Salzburg to ask the company's chief software evangelist James Reinders how far we are from the widespread uptake of these programming techniques.

In his role as director of software development products, the 20-year Intel veteran is in a good position to comment on the realities of life at the parallel-development coalface and the challenges that await developers.

Q: You are launching your Parallel Studio developer toolkit in May. How do you answer critics of these so-called abstraction-driven, power-programming tools who say you are leaving developers ultimately less skilled and more distant from the programming process?
A: Parallel Studio is not a dumbed-down toolset, if that is what you are suggesting. It is actually a more complex product than our previous tools. Developer power users can still drill down to the command line if they wish, and we know that some of them will always do that. But the twin goals of parallelism are correctness and scaling, so providing the appropriate abstraction techniques for maintainability and future-proofing is key.

You have said developers do not tend to wake up one day and wish they could just start working with parallel code. What do you feel will be the feature that attracts programmers to your new toolkit?
I actually think the key draw will be the option to debug parallel code and identify memory leaks with our Parallel Inspector tool. Although that might not initially sound like the number one consideration for making threaded development techniques a practical reality, this product lets developers look directly into their code inside a Visual Studio environment and zero in on making a parallel program work predictably.

Adding memory-leak detection to the capabilities of our previous Thread Checker product makes for a more complete parallel-programming universe.

When scaling to parallel, we can work out hardware parameters accurately on paper in the lab, but how do we cope with software which is inherently difficult to validate outside real-world use?
I have worked in this space for many years and I have learned to think in parallel terms. I appreciate that some people sometimes find this hard or even impossible.

I spoke to a client recently who had been having trouble seeing where parallelism could improve his program, and I finally figured out where the independence was inside his system and saw where we could separate out individual threads.

The trick is you sometimes have to modify the way you think sequentially to do this. It is about a switch in algorithmic thinking, but it does become more intuitive over time.

What kind of negative feedback do you receive from developers who do not want to learn new techniques, and how do you see these objections being overcome?
Game developers used to stick religiously to hand-coding in assembly language and initially baulked at the idea of using a compiler, even though compilers have since become the norm.

We gave these developers compilers back in the mid-1990s with the right level of abstraction that did not compromise on performance. They had the control they wanted, but were not overburdened with details they did not need. So people do come around to new work methods.

If they want to learn how our products work, high-end developers are welcome to ask us about the scheduling algorithms behind our Threading Building Blocks library. This is a group of generic constructs to help write scalable parallel programs, but we find that most programmers are happy to use it at the higher abstracted level only.

You are very much aligned to Windows through your reseller channels and rather less well known for your contributions to parallelism for the open-source market. What work have you done in this area and why?
We open-sourced Intel Threading Building Blocks in July 2007 after it was initially launched in August 2006 as a commercial product. But I was always pretty sure this product would go to open source, and it has proved to be very popular with C++ developers who want to get their hands on the additional tools they need to add parallelism to their applications.

We did this to make sure that the product would be around forever and also to make sure that it could be ported across multiple processors on different platforms. It has become by far the most popular method to do parallel programming in C++.

When we introduced our Linux compiler, which is not open source, more than five years ago, some people saw this move as us competing directly with GNU. Yes, ours is an alternative to theirs and offers some advantages in terms of performance and processor support — but at the end of the day, we are helping make Linux more credible by creating this product.

If we asked the community whether they wanted us to withdraw from the Linux market and just focus on Windows, people would say 'no'. Our Linux compilers have grown to be extremely popular with developers for applications on Linux. You do not have to be an open-source project to show support for the open-source world.

What do you plan for the immediate future in parallel programming?
We will remain focused on C and C++, so that we can make data-parallel programming a reality that developers can use to build predictable programs that scale well across many cores. We will also announce a beta of a new product, based on Intel's Ct multicore programming technology, by the end of the year; it will feature high-level abstraction techniques.

No recompile or processor-specific work will be required from developers who want to use the product across a wide range of parallelism, and its design will be structured to handle irregular problems in data-intensive computing environments. That product will expand our support for parallelism much like our Threading Building Blocks product did, and I think it will spark a lot of debate about the practicalities of parallelism in the real world.
