Aust supercomputing undergoes renaissance

Australia's supercomputing partnerships are collectively spending tens of millions of dollars on new equipment as supercomputing undergoes a renaissance fuelled by increased demand and the development of commercial cluster computing operating on Linux.
Written by Iain Ferguson, Contributor

The Victorian partnership revealed today it plans to issue a request for proposal to the market for new advanced computing equipment, with the RFP due out just weeks after the Australian Partnership for Advanced Computing (APAC) receives responses to its RFP for an AU$12.5 million, four-year contract for a new peak computing system. Release of the APAC RFP itself came just weeks after the South Australian Partnership for Advanced Computing (SAPAC) acquired a new AU$4.5 million Silicon Graphics Altix 160-processor supercomputer.

The state-based advanced computing partnerships -- which include SAPAC, the Victorian Partnership for Advanced Computing (VPAC), the NSW-based Australian Centre for Advanced Computing and Communications (A3) and the Tasmanian Partnership for Advanced Computing, among others -- are typically founded by universities and governments to provide supercomputing facilities for research purposes. APAC is a national partnership of eight organisations -- one in each state, as well as the Australian National University (ANU) and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

VPAC today told ZDNet Australia it plans to put out an RFP in October for advanced computing equipment in the AU$500,000 to AU$600,000 range, with the intention of securing a cluster system with up to one teraflop of capacity. (A teraflop refers to a computer's capacity to handle one trillion floating-point operations per second.)
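For illustration, here is a back-of-the-envelope sketch of how such a peak figure is commonly estimated. The node count, clock speed and per-cycle figure below are hypothetical, chosen only to land near one teraflop; they are not VPAC's specifications.

```python
# Illustrative sketch (not from the article): estimating a cluster's
# theoretical peak. All figures below are hypothetical.

def peak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Theoretical peak = nodes x cores per node x clock rate x FLOPs per cycle."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# A hypothetical 64-node cluster of dual-core 2GHz CPUs, each core
# completing four floating-point operations per cycle:
peak = peak_flops(nodes=64, cores_per_node=2, clock_hz=2.0e9, flops_per_cycle=4)
print(f"{peak / 1e12:.2f} teraflops")  # prints 1.02 teraflops
```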

VPAC managers said they hoped to secure the equipment by November and take delivery by early January. The new system will replace an existing four-year-old HP AlphaServer machine.

According to the APAC RFP, issued 30 June, responses must be submitted by 1 September. The purchase, which will be made by the Australian National University on behalf of APAC, replaces two HP AlphaServer systems: an SC system with 127 nodes incorporating 508 processors, 700GB of memory and 14TB of disk (4TB in global storage and the remainder in local storage), and a GS1280 with 16 processors and 16GB of memory.

APAC head John O'Callaghan said the new APAC supercomputing system was expected to have processing capacity of up to 10 teraflops or, put another way, be able to undertake 10 trillion calculations per second.

O'Callaghan said APAC had so far received "six to eight" responses to the tender, with the "usual suspects" in the supercomputing space having put their names forward.

A supercomputing renaissance?
VPAC's chief executive officer, Bill Appelbe, said the evolution of supercomputing -- most recently from so-called "homebrew" clusters to large-scale commercial clusters, generally running Linux -- combined with the desire of the commercial and academic communities to solve more complex problems had seen the sector enjoy "a bit of a renaissance".

"Before about 2000, almost all supercomputing was on proprietary software and hardware," Appelbe said. "This greatly limited application portability and set a high barrier to usability and ease of adoption of supercomputing".

However, the gains in supercomputing power since around 2000-2001 had far outstripped Moore's Law, Appelbe said, with supercomputing organisations in the top 500 worldwide -- based on processing capacity -- jostling and leapfrogging each other on a constant basis.
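To illustrate what outstripping Moore's Law means in practice, here is a minimal sketch of exponential growth at two doubling rates. The starting capacity and doubling periods are assumptions for the example, not figures from Appelbe or the Top 500 list.

```python
# Illustrative sketch: capacity growth against a Moore's Law baseline.
# The doubling periods below are assumptions, not APAC or Top 500 figures.

def capacity(p0_tflops, years, doubling_period_years):
    """Capacity after `years` if it doubles every `doubling_period_years`."""
    return p0_tflops * 2 ** (years / doubling_period_years)

start = 1.0  # starting capacity in teraflops (hypothetical)
moores_pace = capacity(start, years=4, doubling_period_years=2.0)
faster_pace = capacity(start, years=4, doubling_period_years=1.2)
print(f"Moore's Law pace:  {moores_pace:.1f} teraflops after four years")
print(f"Outstripping pace: {faster_pace:.1f} teraflops after four years")
```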

Appelbe said demand had risen most sharply from the commercial sector, albeit from a small base, while academic demand had also increased.

Appelbe said the stakeholders in APAC -- as well as the national organisation itself -- held roughly 50 percent of the supercomputing power available in Australia, with the remainder spread among government and private-sector bodies. Chief users of supercomputing power include the large-scale engineering, mining and exploration, and automotive industries.

O'Callaghan attributed the healthy demand in the Australian supercomputing arena to high and increasing pressure from researchers, noting APAC could only meet one-third of the demand from the community. That was, he noted, after researchers had scaled back their requests for access to processing power in the hope of having more modest versions accepted, meaning the true level of demand was significantly higher.

The grid project
The release of the APAC tender followed the Australian government's announcement of a grant of AU$29 million to support the next stage of APAC -- from mid-2004 to mid-2006.

One of the key roles being undertaken by APAC is development of a grid infrastructure integrating APAC and partner facilities, designed to help researchers gain "seamless" access to computational and data resources in the national facility.

According to APAC, the grid project will also give researchers access to a new range of services to support research collaboration, nationally and internationally.

O'Callaghan said there were six application projects underway across six areas -- astronomy, high-energy physics, bioinformatics, earth systems, chemistry and geosciences -- as well as three involving development of infrastructure to support the applications.

Some services may be available as early as late this year or early next, he said.

Appelbe said the grid computing project would provide greatest benefit to applications requiring hundreds of runs or simulations, with each simulation saturating a processor for many hours.

"Examples include vehicle crash simulation -- for all angles of impact of two vehicles at all speed ranges -- what angles and speed ranges cause the most harm to occupants of varying height and weight and seat location".

However, other projects may not benefit from the grid structure, he warned.

"Grid computing allows the individual simulations to be 'farmed out', but specialised grid middleware is needed to control farming out the jobs and data, monitoring the jobs and marshalling the results back to the user".
