
NCI delivers science cloud for data-intensive research

The National Computational Infrastructure has built an on-demand, high-performance cloud computing environment to support data-intensive computation in fields such as climate change, earth system science, and life sciences research.
Written by Aimee Chanthadavong, Contributor

The Bureau of Meteorology, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Geoscience Australia, and the Australian National University (ANU) are among a handful of scientific organisations that routinely perform data-intensive computations. This work takes place on infrastructure hosted at the National Computational Infrastructure (NCI), housed at the ANU in Canberra, which is also home to Raijin, one of Australia's fastest supercomputers.

However, given the scale and reach of investigations in areas such as climate change, earth system science, and life sciences research, NCI was tasked with building a high-performance science cloud node both for NeCTAR, the federated initiative under the Australian government's National Collaborative Research Infrastructure Strategy (NCRIS), and for NCI's partner organisations. The NeCTAR cloud is physically located at eight different organisations around the country, but operates as a single federated cloud.

According to NCI cloud services manager Joseph Antony, the design goals for the NCI's "science cloud" were to establish a cloud capability for the organisation that would complement and extend existing investments in supercomputing and storage, as well as to help overcome any end-user shortcomings of a supercomputing environment.

"There was specific emphasis on creating environments that would allow people to have the flexibility and the performance guarantee they would get on a supercomputer," he said, noting that the supercomputer is a large instrument for computational experiments and the cloud would be where all the post-analysis work is conducted.

As a result, NCI signed an AU$2 million contract with Dell to supply a 3,200-core compute cloud using Intel Xeon CPUs housed within 208 Dell PowerEdge C8220 and C8220X compute nodes, 13 Dell PowerEdge C8000XD storage nodes, and PowerEdge R620 rack servers.

In turn, NCI built a cloud using OpenStack, Ceph, and low-latency Ethernet running at 56Gbps, an interconnect speed that, according to Antony, neither Amazon nor Microsoft's Azure can match.

"Even on Amazon, the fastest thing you can get is 10GB, and if you go on Azure, it's probably 40GB, but on our cloud, you can go to 56GB and that's probably the fastest interconnect you can have," he said.

"Importantly, we've coupled this with SSD, so with these nodes, you can do very intensive applications such as aligning gene sequence of data, or climate applications, or working on reconstructing computing tomography imagery like 3D, so it's cued for high-end work."

Antony said the OpenStack-based cloud also had to be flexible enough to support dynamic workflows that the supercomputer cannot handle, and there were two main reasons driving this.

The first was to ensure that even when the supercomputer is running at 90 percent to full capacity, the compute environment can still react quickly to highly data-intensive work that arrives suddenly, such as during a natural disaster like a tsunami or earthquake.

"The computing environment we have right now is very throughput orientated, so you line up a lot of jobs and eventually queue them up and send them through this machine ... as a result, you have to make sure the system is up and running, and it's constantly chewing through jobs," he said.

The second reason was to ensure that this new cloud environment would be able to string together existing complex environments.

"In scientific computing, unlike business computing, you have software that is developed and designed by different groups around the planet; someone would've written it in Windows, another person would've written it on a Linux, and as a result you have all of these complicated software stacks.

"You need to somehow glue them together, and virtualisation gave us that ability to essentially encapsulate all of these workflows in the software stack, and you can actually link it up using a distributed system," he said.

A total of AU$101 million was invested in establishing the hardware infrastructure, as well as the support tools and virtual laboratories that run on it. The Australian government's NCRIS program provided AU$47 million as a co-investment in NeCTAR, and the Australian education sector contributed a further AU$54 million.

On the other side of the country, Perth-based Pawsey supercomputing centre has installed the final-stage upgrade of the supercomputer Magnus, which has also been designed to support data-intensive research.

According to Pawsey supercomputing centre executive director Dr Neil Stringfellow, a third of the supercomputer is reserved for research into geoscience, minerals, and resources, but it also supports applications in areas of nanotechnology, radio astronomy, high-energy physics, architecture and construction, multimedia, and urban planning.

The upgrade of the facility, co-funded by the federal government, the Western Australian government, the CSIRO, and four universities, has lifted Magnus to processing power in excess of a petaflop, or one quadrillion floating-point operations per second. This gives users access to more than 35,000 cores from the Intel Xeon E5-2600 v3 processor family.
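
A back-of-envelope check shows why that core count is consistent with a peak above a petaflop; the clock speed and per-core throughput below are assumptions typical of the E5-2600 v3 family, not figures quoted in the article.

```python
# Back-of-envelope peak-flops estimate; clock speed and flops/cycle are
# assumed values typical of the Xeon E5-2600 v3 family, not quoted figures.
cores = 35_000
clock_hz = 2.6e9         # assumed base clock
flops_per_cycle = 16     # AVX2 fused multiply-add: 2 ops x 8 doubles (assumption)

peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
print(f"~{peak_pflops:.2f} petaflops peak")  # ~1.46 PF, "in excess of a petaflop"
```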

Stringfellow said the facility has been designed to incorporate initiatives that minimise environmental impact and reduce energy usage. One example is that the facility sits above an aquifer, and the centre draws on groundwater, at an average temperature below 21 degrees Celsius, for cooling. Heat captured from the facility is exchanged into the water, which is pumped back down to the aquifer slightly warmer, at approximately 25 degrees Celsius.
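
To make the principle concrete, the sketch below estimates how much heat such a loop rejects. Only the two temperatures come from the article; the flow rate is an assumption for illustration.

```python
# Back-of-envelope groundwater cooling estimate. The flow rate is an
# assumption illustrating the principle, not Pawsey's published figure.
water_cp = 4186.0          # specific heat of water, J/(kg*K)
t_in, t_out = 21.0, 25.0   # supply and return temperatures from the article, deg C
flow_kg_s = 30.0           # assumed groundwater flow rate, kg/s

heat_rejected_kw = flow_kg_s * water_cp * (t_out - t_in) / 1000.0
print(f"~{heat_rejected_kw:.0f} kW of heat rejected")  # ~502 kW at these figures
```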

The Pawsey supercomputing centre project was supported by AU$80 million in funding under the Australian government's measures for national research infrastructure, through the National Collaborative Research Infrastructure Strategy and related programs administered by the Department of Education.
