Cambridge rents out HPC for the cloud

This 'democratisation of HPC' gets underway later this year, when Cambridge University begins leasing processing time on its Darwin supercomputer to small- and medium-sized businesses
Written by Nick Heath, Contributor

Supercomputing for the masses is on the horizon as universities prepare to turn high performance computing (HPC) in the cloud into a reality.

This 'democratisation of HPC' gets underway later this year, when Cambridge University begins leasing processing time on its Darwin supercomputer to small- and medium-sized businesses.

Paul Calleja, director of HPC Services at Cambridge, said it would unlock teraflops of processing power for organisations without the resources to build their own multimillion-pound server farm.

Darwin is a 20-tonne beast made up of 600 Dell servers, with 2,340 processor cores capable of 20 teraflops of processing power. This will increase to 30 teraflops with a forthcoming upgrade to Dell blade servers.

It powers complex simulations of everything from fuel combustion within engines to modelling pharmaceutical molecules for new drugs.

Speaking at a Dell roundtable on HPC, Calleja said: "Universities should be the mind of UK Plc. There are lots of SMEs who would like to have access to HPC, but cost has been a barrier to entry. We are already in talks with businesses in a range of sectors such as the automotive, risk management and pharmaceutical industries."

The service will be offered to SMEs in the next quarter, and other universities are also in talks with Cambridge about tapping into Darwin's processing power.

The university is looking at putting commercial fibre into its network, which will provide multiple 10Gbps links, in preparation for launching the service.

Calleja predicted this could be the way of the future, with regional and central university supercomputer centres providing processing hubs that could be tapped by users across the country.

"Janet [the national education and research network] is looking at shared services for its data centres; it's only a small step to say 'let's put a supercomputer in there and have regional HPC centres'," he said.

Mass take-up of supercomputing will also be eased by falling hardware prices: high-cost proprietary HPC hardware is giving way to off-the-shelf components, putting teraflops of processing power within reach at a fraction of the previous price.

Because Darwin is built on commodity hardware, such as Intel Woodcrest processors, it cost a third as much as the Sun system it replaced, while proving to be 10 times faster.

Dr Chris Rudge, facility manager for the UK Astrophysics Fluids Facility at Leicester University, said: "We have just bought five teraflops of processing power for £100,000. High performance computing is cheaper than it used to be."

The flip side is that off-the-shelf hardware requires far more complex software to run in parallel across hundreds of processors, prompting scientists to develop ready-made code that can be adapted for different research projects.

But ultimately, the cloud model could spell the end of in-house supercomputer centres at universities. Calleja said he was in talks with a business in continental Europe that had 8,000 servers sitting idle overnight.

He said: "If it were to work out cheaper per core, then I would use those. I get no joy from running hardware: it is not interesting to us, it is just a business process."

Martin Wimmer, director of the Computer Center at the University of Regensburg in Germany, added it was now feasible to consider building a supercomputing centre in another country.

He said: "I am considering locating a computing centre where the power is less expensive."
