
GPU to be mainstay in supercomputers

Graphics processing units will become more commonplace in future supercomputers as more vendors and users see the benefits of heterogeneous computing, such as better performance and energy efficiency.
Written by Liau Yun Qing, Contributor

The use of graphics processing units (GPUs) in supercomputers might not yet be prevalent in the market, but uptake is likely to increase in the near future as more manufacturers turn to heterogeneous computing, which combines central processing units (CPUs) and GPUs in one system, two professors predict.

Satoshi Matsuoka, professor at Tokyo Institute of Technology's global scientific information and computing center (GSIC), pointed out that most people used to laugh at the prospect of using GPUs in high-performance computing (HPC) as these chips were mostly used for running games.

However, because these chips can also simulate physics-based realities and handle general-purpose computing, there are benefits to be reaped when GPUs are included in supercomputers, he said.

More vendors are recognizing these benefits as well, with Matsuoka pointing out that as of November 2011, 39 of the Top500 supercomputers combined CPUs and GPUs. This was a jump from just 17 systems on the June 2011 list, he added.

Jeffrey Vetter, professor at the Georgia Institute of Technology's school of computational science and engineering, agreed that GPUs will become commonplace within the HPC arena.

He said that while GPU use is still emerging within the industry, GPUs will be commonly integrated into systems five years from now.

The researchers were in Singapore for a supercomputer workshop organized by Singapore's Agency for Science, Technology and Research (A*Star) Computational Resource Centre.

Beyond computational prowess, there are also energy efficiency benefits to be reaped, Matsuoka pointed out. Citing the example of the Tsubame 2 supercomputer, he said one of the main design briefs when the team he led started developing the machine was that it had to be 10 to 20 times more energy efficient than its CPU-based predecessor.

By integrating GPU chips in Tsubame 2, the second-generation supercomputer was able to deliver 1,264.2 megaflops per watt (MFlops/watt), compared with Tsubame 1's 98.06 MFlops/watt peak, roughly a 13-fold improvement and within the team's target. This allowed Tsubame 2 to be ranked 10th on the Green500 list of energy-efficient supercomputers, a list its predecessor could not even get onto, he added.

Pros and cons of heterogeneous computing
However, Matsuoka noted that building a heterogeneous computing system has its challenges as well. When building Tsubame 2, his team had to work with a vendor to design new nodes to accommodate the processing power of the GPUs. If not, the "engine", referring to the processing capability of the GPUs, would be too powerful for the "car" or supercomputer, he explained.

A heterogeneous system would also need the right storage and networking equipment to exploit the power of GPUs, the professor added.

To make the shift from existing CPU-based HPC to heterogeneous computing, he said there are four areas that need to be worked on. The first is ensuring that the algorithms fed into the supercomputer suit the new programming environment.

At the same time, developers will need to change the programming models or languages used in developing their software, something the GSIC has been working on, the professor said.

Users, too, will have to learn how to use parallel algorithms in order to get the most out of heterogeneous computing, he said, adding that the industry will need to adopt new languages to bypass the complexity.
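To give a sense of what that parallel programming shift looks like in practice, the sketch below is a minimal, hypothetical CUDA example, not drawn from Tsubame 2 or the professors' work: a simple array update is expressed as a kernel in which each GPU thread handles one element. All names and sizes are illustrative.

// A minimal, hypothetical CUDA sketch (illustrative only, not from Tsubame 2):
// each GPU thread updates one array element, replacing a sequential CPU loop.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) y[i] = a * x[i] + y[i];               // one element per thread
}

int main() {
    const int n = 1 << 20;                      // one million elements (arbitrary)
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // memory visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;                          // threads per block
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover n elements
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);  // launch the kernel on the GPU
    cudaDeviceSynchronize();                    // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);                // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

The point is not the arithmetic but the model: the work is decomposed across thousands of lightweight GPU threads, which is the style of thinking Matsuoka says users and language designers will have to accommodate.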

Alternatively, users can make use of prepackaged applications that are GPU-enabled, he suggested.
