The tests, called Scale Testing for the Experiment Programme '09, threw huge amounts of data around the distributed computing project, which uses dedicated optical-fiber networks to distribute data from Cern (the European Organization for Nuclear Research) to 11 main computer centers in Europe, Asia and North America.
From these centers, data is dispatched to over 140 centers in 33 countries around the globe, where the LHC data is managed and processed. The recent grid tests, which lasted for two weeks, were completed before the beginning of July.
LHC computing-grid project leader Ian Bird said on Friday that Cern had tried to break the grid, but had not succeeded.
"People were trying to break the system by seeing how much data we could push through it, but we didn't [break it]," Bird told ZDNet UK. "The test was successful."
Data from all the experiments running at Cern — including the Atlas particle detector, which is attached to the LHC — was processed through the grid, according to Bird. While the amount of data expected from the LHC will be in the region of 1.3GB per second, the grid systems were bombarded with 4GB per second. "The data volume got to a much larger scale than is needed," Bird said.
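For a sense of scale, a rough back-of-envelope calculation using the figures quoted above (1.3GB per second expected, 4GB per second sustained in testing) shows the headroom the tests demonstrated and the daily data volume the grid would face; the calculation is purely illustrative, not from Cern.

```python
# Illustrative arithmetic based on the throughput figures quoted
# in the article; the derived numbers are not from Cern.

expected_rate_gb_s = 1.3   # expected LHC output, GB per second
tested_rate_gb_s = 4.0     # rate pushed through the grid during the tests

# How far beyond the expected load the stress tests went
headroom = tested_rate_gb_s / expected_rate_gb_s

# Daily volume at the expected rate (86,400 seconds per day, in TB)
daily_volume_tb = expected_rate_gb_s * 86_400 / 1_000

print(f"Stress-test headroom: {headroom:.1f}x")          # about 3.1x
print(f"Expected daily volume: {daily_volume_tb:.0f} TB")  # about 112 TB
```

In other words, the grid was asked to absorb roughly three times the data rate the collider is expected to produce.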
Cern plans to restart the LHC in October, following an incident last September that halted the experiment. A fault, caused by imperfect welding, led to a leak of liquid helium that caused damage when it heated and expanded.
At present, the LHC itself is not generating any data, as no experiments are being conducted, Bird said. However, the detectors had been gathering data from cosmic rays until testing of the machine halted that collection. Collection is due to resume "in a few weeks", Bird added.
Bird did not rule out a further major test of the computing grid before the LHC's October restart, as some parts of the grid had been offline due to scheduled downtime.
This article was originally posted on ZDNet UK.