
NCI doubles supercomputer throughput with Power

Australia's national research computing service has reported up to a two-to-three-times increase in performance since integrating IBM's Power Systems with its existing x86 architecture.
Written by Asha Barbaschow, Contributor
NCI's Raijin cluster. Image: Supplied

The supercomputing market is largely dominated by x86 architecture, with Intel holding the majority share. According to Dr Muhammad Atif, manager of high-performance computing (HPC) systems and cloud services at the National Computational Infrastructure (NCI), when a single large vendor dominates, it tends to go its own way, which can leave certain applications or features unsupported or absent from its architecture.

As a result, NCI turned to IBM to boost the research capacity of Raijin, the biggest supercomputing cluster in the Southern Hemisphere, which currently benchmarks at 1.67 petaflops, Atif told ZDNet.

NCI, Australia's national research computing service, purchased four IBM Power Systems servers for HPC in December, in a bid to advance its research efforts through artificial intelligence (AI), deep learning, high-performance data analytics, and other compute-heavy workloads.

The upgrades added much-needed capacity to the Raijin system, Atif explained. He said that while the Intel CPU was fast, it could not be fed data as quickly as it consumed it.

"Because memory is slow, you are feeding data through it a bit slow, which slows down the entire processing," he explained. "With IBM Power 8 architecture, what we found is that it's able to feed processor data at much higher rate than normal x86 and this has resulted in applications starting to scale well."

"IBM is actually two years ahead of what Intel was delivering us -- two years is an eternity."

NCI integrated IBM's Power Systems with Raijin's existing x86 architecture and has observed up to a two-to-three-times improvement in performance when running on Power 8 architecture compared with the latest x86 architecture installed at NCI, Atif said.

NCI staff have also been working closely with Australia's scientific community to qualify a range of memory-intensive applications for use under IBM's architecture.

For example, NCI is the first institution to port Q-Chem, a quantum chemistry package, to Power Systems. Atif said initial benchmarks show the ported application outpacing the same application running on Broadwell x86 architecture.

With 35,000 researchers in total on its books, the NCI operates as a formal collaboration of the Australian National University, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Australian Bureau of Meteorology (BOM), and Geoscience Australia, as well as through partnerships with a number of research-intensive universities that are supported by the Australian Research Council.

Atif said Raijin currently boasts over 4,500 active users.

With a paradigm shift taking place towards machine learning and artificial intelligence, Atif said NCI must ensure that it is well-placed to meet the new "technological frontiers".

For now, NCI researchers are using the IBM-provided nodes for memory-bandwidth-intensive workloads, but as familiarity with the technology grows, the Power Systems nodes are expected to give those researchers the opportunity to explore the intersection of AI and HPC across a wide range of scientific applications.

"Our users are already discovering the benefits of these new technologies in regards to big-data analytics and some scientific simulations," he explained. "When people think AI they think in terms of Siri, Cortana, or Google assistant -- talking to a system and getting results -- in scientific applications, it's totally different."

"Previously, science tried to find a needle in a haystack, but now data is so big that your first question is: 'Where is my haystack?' AI actually helps you narrow down where actually your haystack is and then you can actually do scientific analysis and go deeper using your calculations to find your needle."

He said AI and machine learning are not about raw compute speed; rather, they are about reducing the time otherwise spent performing calculations and coming up with results.

"The scientific community is yet to use machine learning and AI in a real sense ... but work is going to happen in the next four-five years," he said, highlighting medical imagery as one example.

Supercomputers enable researchers to study everything from life at the molecular level, such as atoms in the human body and proteins, to space and the climate on Earth.

To put that scale in context, estimates indicate that merely downloading the data from a simulation of a single exploding star would take a standard home computer more than three years. Supercomputers like Raijin, which Atif said runs at a rate equivalent to 40,000 desktops operating simultaneously, allow scientists to compile, analyse, and visualise these incredibly complex simulations in a fraction of the time.
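
As a back-of-envelope check of that equivalence (the per-desktop figure below is our arithmetic, not NCI's), dividing Raijin's 1.67-petaflop benchmark across 40,000 machines implies roughly 42 gigaflops per desktop, a plausible figure for a consumer CPU:

    # Illustrative arithmetic only; the inputs are the figures quoted in this article.
    raijin_pflops = 1.67                    # Raijin's benchmarked performance
    desktops = 40_000                       # Atif's quoted desktop equivalence

    gflops_per_desktop = raijin_pflops * 1e6 / desktops   # 1 PFLOPS = 1e6 GFLOPS
    print(f"{gflops_per_desktop:.0f} GFLOPS per desktop") # prints ~42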

"Supercomputers enable research that is practically impossible any other way," Atif told ZDNet. "By connecting thousands of computer processors together with lightning fast connections and huge data repositories, we open up a whole new realm of research possibilities."

