Lack of training hinders GPU adoption in HPC

Adoption of graphics processing units in supercomputers is hindered by a dearth of parallel programming knowledge and a lack of support from software vendors, observes an Nvidia exec.
Written by Liau Yun Qing, Contributor

SINGAPORE--The lack of training in parallel programming and limited support from independent software vendors (ISVs) are key obstacles hindering wider adoption of general-purpose computation on graphics processing units (GPGPU) in industries that use high-performance computing (HPC).

Simon See, chief solution architect and director of solution architecture at Nvidia, noted that both the research and production sectors use HPC for modeling and simulation, but the two face different challenges in adopting GPU computing.

For researchers, the main challenge lies in the lack of training in parallel programming, said See, who was speaking to reporters at the GPU Technology Conference here Thursday. While it is possible for researchers to write their own computational code for HPC systems, most are not able to "think in parallel programming", which prevents them from applying the technology, he said.
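
To make that gap concrete, the shift is from writing one loop that visits every element in turn to describing the work of a single element and letting the GPU run thousands of such threads at once. The CUDA sketch below is a minimal, hypothetical illustration and is not drawn from the article; the function names and launch parameters are assumptions.

    #include <cuda_runtime.h>

    // Serial habit: one CPU loop visits every element in order.
    void scale_cpu(float *data, float factor, int n) {
        for (int i = 0; i < n; ++i)
            data[i] *= factor;
    }

    // Parallel thinking: describe the work for a single element; the GPU
    // launches one thread per element and runs them concurrently.
    __global__ void scale_gpu(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)              // guard threads that fall past the end
            data[i] *= factor;
    }

    // Hypothetical launch: enough 256-thread blocks to cover n elements.
    // scale_gpu<<<(n + 255) / 256, 256>>>(d_data, factor, n);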

On the other hand, those in the production sector, such as oil and gas and manufacturing, are still dependent on their ISVs to develop applications for HPC systems, he explained. Companies in these verticals can therefore switch to GPGPU only if and when their ISVs support the technology, he added.

He noted that Nvidia currently runs various programs to build parallel programming expertise, pointing to the chipmaker's CUDA Centers of Excellence, CUDA Research Centers and CUDA Teaching Centers.

In addition, the company secured backing from an HPC software vendor last November when Platform Computing said it would support Nvidia's hardware.

The integration of GPGPU in HPC gained prominence when it propelled China's Tianhe-1A to pole position on the TOP500 supercomputer list.

Bertil Schmidt, associate professor at the School of Computer Engineering at Nanyang Technological University (NTU), also backed the use of GPGPU in HPC. In fact, NTU's research labs introduced GPGPU in 2005 because the architecture offered faster computing power, said Schmidt, who was also at the GPU conference.

GPUs currently offer the fastest computing capability as well as the best results in terms of performance and cost, he added.

According to a Chinese researcher, while GPGPU is helpful in parts of scientific computing, it can be difficult to use effectively if programmed incorrectly. Ge Wei, from the Institute of Process Engineering at the Chinese Academy of Sciences, has used GPGPU in his process engineering research. He advised other researchers to explore the benefits of the technology before adopting the platform, and to adapt their algorithms to suit the new hardware.
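
One common form of that adaptation is reorganizing data so that neighbouring GPU threads read neighbouring memory locations. The CUDA sketch below is a hypothetical illustration of this idea and is not taken from Ge's work; the struct and kernel names are assumptions.

    #include <cuda_runtime.h>

    // Array-of-structures layout: natural in serial CPU code, but adjacent
    // GPU threads would then read memory addresses that are far apart.
    struct ParticleAoS { float x, y, z, mass; };

    // Structure-of-arrays layout: thread i reads x[i] and thread i+1 reads
    // x[i+1], so a warp's loads coalesce into few memory transactions.
    struct ParticlesSoA { float *x, *y, *z, *mass; };

    // Placeholder kernel over the SoA layout; the update rule is illustrative.
    __global__ void step(ParticlesSoA p, float dt, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            p.x[i] += p.mass[i] * dt;
    }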

While See acknowledged that GPU-based computing is still lacking in the enterprise space, he noted that research has indicated how enterprise databases can benefit from the technology. He added that some industry players are also researching whether GPU computing can solve the database growth issue, as parts of database algorithms can be scaled.

While not yet popular for enterprise databases, GPGPU has been used by players in the financial sector, especially in risk analysis, to simulate the stock market and make financial predictions, he said.
