Asia's HPC space needs to catch up on software

Summary: The region has many of the world's top supercomputers but needs to build up skills to catch up with the United States, which is 50 years ahead, says Nvidia exec.

SAN JOSE--Many of the world's top supercomputers have been built in Asia, but the region's industry will need to work on developing high-performance computing (HPC) skills to catch up with the United States.

In a media roundtable here Wednesday, Sumit Gupta, senior director for Tesla GPU computing at Nvidia, said the U.S. was about 50 years ahead in HPC skills compared with the Asian region due to the former's longer history working with supercomputers.

To address this, Gupta urged for more training and specialized workshops to bring the required skills to this region. "The bottom line is, the more young engineers and scientists get to be educated, the faster the adoption will be," he said.

He noted that the Chinese government has been "doing a good job" by building supercomputers, as well as providing funds for engineers to develop their HPC skills. While this is the right step, he added that there needs to be more training.

In Asia, for research centers building high-end HPC systems such as Tianhe-1A in China and the K Computer and Tsubame 2.0 in Japan, the cost of building a supercomputer is "not a big deal," said Gupta. Small and midsize universities, however, can benefit from more affordable HPC systems, he noted.

The executive said Nvidia has addressed this need: its new GPU (graphics processing unit) computing architecture, Kepler, enables research labs to build a supercomputer with 1 petaflops of performance using just 10 racks.

In contrast, 42 racks of the older architecture, Fermi, were needed to build the Tsubame 2.0 supercomputer, which also delivers 1 petaflops, he said.

In addition, Kepler provides better energy efficiency. Gupta said the Tsubame 2.0 supercomputer required 1.3 megawatts of power, while a Kepler-based 1-petaflops supercomputer needs only 400 kilowatts.
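Taken together, the figures quoted above imply roughly a fourfold gain in rack density and a threefold gain in power efficiency for the same 1-petaflops target. A quick back-of-the-envelope check, using only the numbers cited in the article:

```python
# Figures quoted in the article: Tsubame 2.0 built on Fermi, versus a
# hypothetical Kepler-based machine of the same 1-petaflops performance.
fermi_racks, kepler_racks = 42, 10
fermi_power_kw, kepler_power_kw = 1300, 400  # 1.3 MW vs. 400 kW

density_gain = fermi_racks / kepler_racks           # racks needed, Fermi vs. Kepler
efficiency_gain = fermi_power_kw / kepler_power_kw  # power drawn, Fermi vs. Kepler

print(f"rack density gain: {density_gain:.2f}x")       # 4.20x
print(f"power efficiency gain: {efficiency_gain:.2f}x")  # 3.25x
```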

Intel's architecture a "good PowerPoint"
Queried about how competitor Intel's many integrated core (MIC) architecture will affect Nvidia, Gupta said: "Intel's MIC is a really good PowerPoint slide."

Noting that Intel's processor was built specifically for the HPC space, he said: "Any product that is developed specifically for HPC cannot sustain on its own."

According to him, Intel's MIC would require a server workstation to operate while Nvidia's product could be operated with a laptop.

Intel did not respond to ZDNet Asia's query about its HPC roadmap.

Server players support hybrid supercomps
At a pre-briefing, Gupta noted an increase in GPU-accelerated applications after Nvidia launched its Fermi architecture in 2010. He shared that GPU use in the top supercomputers stood at under 20 percent in 2011 but climbed to slightly less than 60 percent after that.

The "big jump" in performance provided had attracted supercomputer centers to use a hybrid computational architecture which includes central processing unit (CPU) and GPU chips in the same HPC system, he said.

Stephen Bovis, vice president and general manager for industry standard servers and software at Hewlett-Packard Asia-Pacific and Japan, agreed that GPUs were gaining acceptance in the marketplace, especially among organizations that needed to accelerate application delivery.

"GPUs are able to speed up delivery from between 10 and 100 times, depending on the application itself," he said.

"However, not all applications will run on GPUs as it takes time for an application to be ported to run on GPUs", Bovis added.

Arun Ulag, general manager of server and tools division at Microsoft Asia-Pacific, added that the combination of GPU and CPU was "a great way" of using multi-core processing.

"In this co-processing model, the compute-intensive portions of an application use the parallel computing capabilities of the GPU, while the sequential part of an application's code runs on the CPU," Ulag explained.

He said several GPU offerings from multiple vendors including Nvidia and AMD could be used for general-purpose computation in a Windows HPC Server 2008 cluster. These products include Nvidia Tesla 10-series GPUs, Tesla 20-series GPUs, AMD FirePro V8800 computing processor, and AMD FireStream computing processors.

An IBM spokesperson told ZDNet Asia that the GPU was just one of several variations of accelerators for HPC systems. He pointed to the IBM Roadrunner, which uses a blend of Cell processors and AMD x86 chips, adding that other accelerator technologies include field-programmable gate arrays (FPGAs).

Liau Yun Qing of ZDNet Asia reported from Nvidia's GPU Technology Conference in San Jose, United States.
