Cern is evaluating cloud-computing technologies to help process and manage data from the world's largest particle accelerator.
The European nuclear research organisation plans to merge cloud and grid-computing capabilities to make data processing for the Large Hadron Collider (LHC) more efficient and environmentally friendly, Cern deputy IT department head David Foster told ZDNet UK on Tuesday.
"We're looking at cloud computing technologies and how we can use them," Foster said. "The idea is that science clouds and grid computing combined will be a powerful tool, combined with high-performance networking."
Foster ruled out using commercial cloud services because of the costs and the amount of data processing necessary for projects such as the LHC. Cern has estimated that the LHC will capture approximately 10 petabytes of data per year, while a single LHC experiment will have a data throughput of one terabit per second.
In the late 1990s, Cern chose distributed computing to deal with the sheer volume of data from the experiment. Cern now acts as the data collection point, or Tier 0, for LHC data. Various Tier 1 datacentres around the world, including one in Taipei, are linked to Cern via fibre-optic cable. These centres reconstruct the data and turn it into simulations, while Tier 2 sites allow individual users to analyse it.
Cern has a history of pioneering computing. Tim Berners-Lee invented the world wide web while working at the organisation, whose laboratory straddles the Franco-Swiss border near Geneva. Foster said Cern wished to continue this tradition by looking at ways of linking private science clouds, green datacentre technology, grid computing, and high-performance networking.