The University of Western Australia (UWA) has welcomed a high-performance computing (HPC) cluster to its Perth campus.
The Pople supercomputer has taken up residence within the Faculty of Science to aid students with computational chemistry, biology, and physics, with its big data processing capabilities also benefiting research in areas such as gravitational waves.
According to Dr Amir Karton, head of the computational chemistry lab at UWA's School of Chemistry and Biochemistry, the Pople HPC places the faculty in a unique position for supporting their advanced research.
"For example, it will be used for conducting multi-scale simulations of biochemical processes, studying gravitational waves, and simulating combustion processes which generate compounds important for seed germination," Karton said.
"Such research could have been previously carried out only on national supercomputers -- now these capabilities are accessible to any researcher in the Faculty."
The university said Pople's compute nodes comprise 2,316 cores, 7.8 terabytes of main memory, and 153 terabytes of local scratch disk.
Dean Taylor, the faculty's systems administrator, said Pople was specifically designed for carrying out large-memory and data-intensive applications involved in computational chemistry, biology, physics, and big data research.
"For this purpose it contains compute nodes with up to 512 gigabytes of RAM and large solid-state disks," he said.
Of those, 1,896 Intel Xeon cores were donated by Perth-based geoscience company DownUnder GeoSolutions, which said the donation was its way of investing in the future.
The Australian Bureau of Meteorology (BOM) signed a AU$77 million supercomputer contract in July with American manufacturer Cray.
The new Cray XC-40 supercomputer is expected to be up and running at the BOM in mid-2016, replacing the ageing Sun Microsystems machine, which was commissioned in 2013.
The Cray XC-40 supercomputer runs a Linux-based operating system designed to run large, complex applications and scale efficiently to more than 500,000 processor cores. The specific model to be installed at the BOM will comprise 2,160 compute nodes with 51,840 Intel Xeon cores, 276TB of RAM, and 4.3PB of usable storage.
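Dividing the article's totals out gives a sense of the individual XC-40 nodes. These per-node figures are our own arithmetic from the published totals, not specifications from Cray or the BOM.

```python
# BOM Cray XC-40 totals (from the article); the per-node breakdown is
# our own derived arithmetic, using decimal units (1 TB = 1,000 GB).
nodes = 2160
cores = 51840
ram_tb = 276

cores_per_node = cores // nodes
ram_gb_per_node = ram_tb * 1000 / nodes

print(f"Cores per node: {cores_per_node}")           # 24
print(f"RAM per node:   {ram_gb_per_node:.1f} GB")   # ~127.8 GB
```

Twenty-four cores and roughly 128 GB of RAM per node is a conventional dual-socket Xeon configuration for this class of machine.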
Earlier this month, CIO and deputy director information systems and services Lesley Seebeck said that the BOM was taking advantage of the supercomputer to redesign its systems.
"We are working closely with the Australian security agencies as we always do, given that whenever we get or upgrade our systems we try and take into account security," she said.
"So what we're doing at the moment with the supercomputer is taking advantage of the fact that this has been installed and is undergoing acceptance testing at the moment, and we're designing our systems around that to ensure that it is hardened and resilient as we can possibly make it, because we have one operational system that's in the interests of all of us to ensure it is secure."
In September, the Department of Defence's Defence Science and Technology Group (DST Group) went to tender, seeking a high-performance supercomputer to support aerodynamic simulation and run its computational fluid dynamics (CFD) workloads.
DST Group requested a Linux system running either 64-bit CentOS 6 or Red Hat 6, with homogeneous compute nodes and x86-64 processors in each compute node.
The tender added that the standard non-turbo frequency of every CPU must be 2.6 GHz or higher, and that the memory must supply maximum bandwidth to the CPUs, with a capacity of at least 3 GiB per core.
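The 3 GiB-per-core floor translates directly into a minimum memory requirement per node once a node's core count is fixed. The node sizes below are hypothetical examples for illustration, not figures from the tender.

```python
# Minimum node memory implied by the tender's 3 GiB-per-core requirement.
# The per-node core counts are hypothetical examples, not from the tender.
def min_node_memory_gib(cores_per_node, gib_per_core=3):
    """Smallest allowable memory (GiB) for a node with the given core count."""
    return cores_per_node * gib_per_core

for cores in (16, 24, 32):  # hypothetical dual-socket node sizes
    print(f"A {cores}-core node needs at least {min_node_memory_gib(cores)} GiB")
```

For example, a hypothetical 24-core node would need at least 72 GiB of RAM to satisfy the requirement.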