
Is supercomputing just about performance?

Perspective: Conflicting views on what the 'P' in 'HPC' really stands for reflect important changes taking place within the field of supercomputing, says expert Andrew Jones.
Written by Andrew Jones, Contributor

Originally, the P in HPC stood for 'performance', and to professionals in the field, HPC will always mean 'high-performance computing'--also known as supercomputing.

As one of those professionals, and something of a pedant, I was always irritated when people took the P to mean 'power', as in 'high-powered computing'. "It's performance, not power..." I would mutter through gritted teeth.

Power is electricity, not speed; and electricity is just a bill and some logistics in the background. But over the past few years, that misnomer has become painfully accurate: electrical power has become as important as performance in HPC.

Of course, the P still stands for 'performance', but power is now one of the deciding factors in planning, deploying and operating HPC services.

Another P, 'productivity', has also emerged over the past year or two. Some people have tried to mutate HPC to stand for 'high-productivity computing', to emphasize that it is not only the raw compute performance at your disposal that counts but, more importantly, how well you are able to make use of that performance for your business objectives. In other words: how productive is it?

Productivity matters
While I stand firm on 'P equals performance', I support the focus on productivity. Indeed, I have held forth on the topic on a number of occasions over the past few years.

The term 'high-productivity computing' tries to recognize that performance matters across the whole ecosystem of HPC, not just the hardware and software deployed.

For example, how usable is the HPC service for the end users? How efficient, in terms of human time, is the process of developing and verifying software for the system? How does the performance achieved--measured in business output, not a mere metric such as teraflops--compare with the cost in capital, human effort and electricity? Is there effective support for any pre- and post-processing of data, for example, visualization? What about computational efficiency--are the applications using the processor, memory and interconnect optimally, or can the software be tuned to run faster on the hardware? And so on.
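To put that cost question in concrete terms, here is a minimal back-of-the-envelope sketch in Python. Every figure, and the productivity function itself, is a hypothetical assumption of mine rather than a number from any real procurement; the point is only that the quantity being judged is business output per unit of total cost of ownership, not teraflops.

# Illustrative productivity sketch -- all figures are hypothetical assumptions,
# not data from this article.

def productivity(jobs_per_year, capital_cost, staff_cost, power_cost, years=3):
    """Business output (jobs completed) per unit of total cost of ownership."""
    total_cost = capital_cost + years * (staff_cost + power_cost)
    return (years * jobs_per_year) / total_cost

# A machine with higher peak performance but poorer usability can still deliver
# less business output per unit of spend.
machine_a = productivity(jobs_per_year=500, capital_cost=2_000_000,
                         staff_cost=300_000, power_cost=250_000)
machine_b = productivity(jobs_per_year=350, capital_cost=1_000_000,
                         staff_cost=200_000, power_cost=120_000)
print(f"Machine A: {machine_a * 1e6:.0f} jobs per $1m of TCO")
print(f"Machine B: {machine_b * 1e6:.0f} jobs per $1m of TCO")

On these assumed figures the nominally slower machine B comes out ahead, which is exactly the kind of trade-off a productivity assessment is meant to expose.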

Of course, not all these factors will be important for every business. Some lucky users of HPC have another budget holder paying, say, the electricity, so they can exclude that element from the first cut of their productivity assessment. Likewise, if the HPC service runs only off-the-shelf software, then availability of that software matters, but the ease of porting and developing is not a direct concern in terms of user productivity.

It's all too easy to focus HPC budgets on the hardware and, increasingly, the electricity. That's understandable. The hardware is something that buyers and their bosses can clearly see has been obtained with the money; you can kick it, although I wouldn't recommend it.

We have also enjoyed many years during which buying updated hardware every year or so delivered increased application performance and even measurably enhanced productivity.

Looking to the future
Of course, it should now be obvious that this free lunch of easy performance increases is over, as the industry looks to deliver more cores, more parallelism and more diverse processing capabilities, rather than higher clock rates.

Investing in other aspects of productivity--such as people, processes and software--may seem less concrete and provides nothing kickable, but in reality the business case may be even easier to make.

Applications tend to live much longer than any hardware, so the return on investment from a major software development continues to deliver over many years, not just the typical three-year lifetime of a hardware deployment.

Innovations in algorithms have delivered as many orders-of-magnitude increases in performance for some applications as has the Moore's Law hardware race over the past few decades.
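As a rough illustration of the scale involved (my example, not the author's), compare the operation counts of a quadratic-cost algorithm with a near-linear replacement on the same problem size; the gap from the algorithm alone spans several orders of magnitude, before any hardware improvement is counted.

# Hypothetical illustration: operation counts for an O(n^2) algorithm versus an
# O(n log n) alternative on the same problem. The problem size is an assumption.
import math

n = 10_000_000               # ten million unknowns, chosen for illustration
old_ops = n ** 2             # quadratic-cost algorithm
new_ops = n * math.log2(n)   # near-linear algorithm

print(f"Algorithmic speedup: ~{old_ops / new_ops:,.0f}x")
# Roughly 430,000x on these assumptions -- several orders of magnitude from the
# algorithm alone, comparable in scale to decades of hardware doublings.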

As Moore's Law throws a new twist at us--twice as much parallelism every 18 months, instead of doubling clock rates--the need for investment in software, to be sure of future performance from the new direction of computing hardware, is stronger than ever.
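To see why more cores on their own are not enough, here is a minimal sketch of Amdahl's law; the five percent serial fraction is an assumption for illustration, not a measured figure. If part of an application cannot be parallelized, repeatedly doubling the core count yields rapidly diminishing returns unless the software itself is reworked.

# Minimal Amdahl's law sketch -- the 5% serial fraction is an illustrative assumption.

def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup when only part of the work can run in parallel."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

serial_fraction = 0.05  # 5% of the runtime stays serial unless the code is rewritten
for cores in (2, 8, 64, 1024):
    print(f"{cores:5d} cores -> {amdahl_speedup(serial_fraction, cores):5.1f}x speedup")
# With 5% serial work the speedup saturates near 20x no matter how many cores are
# added -- hence the case for investing in the software, not just the hardware.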

So productivity is the goal, but productivity needs performance to deliver.

As vice president of HPC at the Numerical Algorithms Group, Andrew Jones leads the company's HPC services and consulting business, providing expertise in parallel, scalable and robust software development. Jones is well known in the supercomputing community. He is a former head of HPC at the University of Manchester and has more than 10 years' experience in HPC as an end user. Jones contributed this article to ZDNet Asia's sister site ZDNet UK.
