
Should programming supercomputers be hard?

Those who glibly argue for easier programming of supercomputers are broaching a complex issue, says Andrew Jones
Written by Andrew Jones, Contributor

Many in the supercomputing community want programming to be made easier, but the fundamental issue is far more complex than that, says Andrew Jones.

Most people agree that programming parallel computers is hard, especially if performance at scale is required — that is, if it involves a large number of processors. So it is hardly surprising that the question of how to make programming supercomputers easier is a popular topic at high-performance computing (HPC) conferences.
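To see why, consider even the simplest distributed job. The C sketch below (illustrative only, assuming a standard MPI installation, and not drawn from anything in this article) sums numbers across processes: the programmer must decide how the work is split and must explicitly combine the partial results, and at scale those decisions come to dominate performance.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Illustrative sketch: a parallel sum with MPI. */
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

    /* Decomposition is the programmer's problem: each process
       takes a strided slice of the range to be summed. */
    long local = 0;
    for (long i = rank; i < 1000000; i += size)
        local += i;

    /* So is communication: partial sums must be combined explicitly. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %ld\n", total);

    MPI_Finalize();
    return 0;
}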

It is unarguable that parallel programming skills must become more common, for two reasons. First, parallel hardware — whether multicore processors or graphics processing units — is here for the foreseeable future. Second, wider use of HPC is critically important to scientific and industrial advancement. I define HPC as computing at a performance level substantially beyond that of a typical individual workstation.

Helpful analogy
But a supercomputer is not just a fast laptop, and we must stop treating it like one. In an attempt to press this point during a panel discussion at a recent conference, I tried an analogy. Analogies can be both powerful and full of holes — especially those made up on the spot. My colleagues on the panel were kind enough to point out the holes.

This is what I said: with a few weeks' training, it is possible to drive a car effectively without any significant knowledge of the science or engineering of how it works. With many months of training, followed by on-the-job experience and a fair understanding of how it works, it is possible to fly a plane safely. Piloting a space shuttle requires many years of training and a deep understanding of most of its systems and the underpinning science.

The analogy to HPC is this: it should be possible to program and use individual-scale HPC with minimal training and only the most basic understanding of how it works. The invasion of multicore processors has made this harder, but we must look for tools and technologies to swing personal HPC back to simplicity for the programmer and user.
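Directive-based programming is one candidate for that swing back to simplicity. As a sketch (illustrative only, assuming a C compiler with OpenMP support), a single pragma parallelises a loop across the cores of a workstation, leaving decomposition and the combining of results to the runtime:

#include <omp.h>
#include <stdio.h>

int main(void)
{
    long total = 0;

    /* One directive asks the OpenMP runtime to split the loop
       across cores and combine the per-thread partial sums. */
    #pragma omp parallel for reduction(+:total)
    for (long i = 0; i < 1000000; i++)
        total += i;

    printf("total = %ld\n", total);
    return 0;
}

Contrast this with what distributed-memory programming demands, as in the earlier MPI sketch: the directive hides exactly the detail that the programmer of a large machine must manage by hand.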

Programmers, or pilots, of larger HPC systems should be expected to have a reasonable level of understanding of HPC technology, and to accept that more in-depth training and expertise will be required to use them effectively. Users — passengers — should still see simplicity. Clearly, technology has a role to play in making HPC-at-scale easier for the programmer, but making it simple is a goal too far.

The largest supercomputers in leadership-class facilities should quite rightly be programmed by experts with a deep understanding of the architecture and engineering involved, and how to make them perform well. It is reasonable that access to a rare and expensive facility — a shuttle or leadership supercomputer — requires extra effort by the user in return for the advantage that leading-edge computing capability gives them.

Strategic facilities
I have also pursued this philosophy by arguing that while 'personal supercomputers' can be looked on as fast laptops, the essence of leadership supercomputers is that they are not computers as such. They are scientific instruments of discovery — or strategic business facilities — that just happen to be made from computer technology.

As with other leading scientific facilities, expertise in the facility itself should rightly be expected as a normal part of getting the best results from it.

Clearly, even the HPC experts who program the most powerful supercomputers will seek easier programming methods and tools, but the balance still lies with expertise, experience and performance — not with programming simplicity for non-HPC experts.

The tools, technologies and supporting software ecosystem for HPC experts who program leadership supercomputers are different from those that would be required if non-HPC specialists were the programmers. Recognising this difference could enable the HPC community to focus on creating the right tools for each class of programmer — small to mid-scale and leading-edge.

Expertise needed
To reiterate, for personal or medium-scale HPC, where the supercomputer is simply a tool rather than a specialist facility, we must work towards more accessible and powerful programming for non-specialists. But at the leading edge, the need for an in-depth understanding of the facility should not be seen as a bad thing.

Other leading scientific or industrial facilities expect specialist expertise — such as that found in, say, signal processing, wind tunnel engineering or cryogenics — as a fundamental part of the team, along with end-users or scientists. So why not supercomputers?

Rather than trying to make the radio telescope simple enough for the astronomer to use without any signal-processing understanding on the team, why not accept HPC programming experts as an essential part of any science team trying to use leadership supercomputers?

As vice-president of HPC at the Numerical Algorithms Group, Andrew Jones leads the company's HPC services and consulting business, providing expertise in parallel, scalable and robust software development. Jones is well known in the supercomputing community. He is a former head of HPC at the University of Manchester and has more than 10 years' experience in HPC as an end user.
