Inside an oil industry datacentre

Summary: ZDNet UK has toured an oil industry 'megacentre' to see what demands this strenuous, compute-intensive industry places on its datacentres

  • Data tape boxes

    Each of these 100 or so boxes holds tapes containing data harvested by the PGS fleet. One box contains around 30 tapes, and each tape holds around 500GB of data. If all the pictured boxes contain 30 tapes, that adds up to roughly 1.5 petabytes of raw storage capacity, as the back-of-envelope sketch below shows.
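
    A minimal sketch of that capacity arithmetic. The box count, tapes per box and tape size are the approximate figures quoted above, not exact inventory numbers:

    ```python
    # Back-of-envelope check on the tape library's raw capacity.
    # All figures are the approximate ones quoted in the article.
    boxes = 100            # roughly 100 boxes on site
    tapes_per_box = 30     # around 30 tapes per box
    gb_per_tape = 500      # each tape holds about 500GB

    total_gb = boxes * tapes_per_box * gb_per_tape
    total_pb = total_gb / 1_000_000  # 1PB = 1,000,000GB in decimal units

    print(f"{total_gb:,}GB is roughly {total_pb}PB")  # 1,500,000GB is roughly 1.5PB
    ```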

    The tapes are stored in a separate mini-datacentre on site for redundancy purposes, before their data is sent to the main processing hall for analysis.

    Photo credit: Jack Clark


    Want to know more about PGS's 'lunatic fringe' computing? Read ZDNet UK's datacentre tour diary.


  • PGS processing hall

    The processing hall consists of around 104 racks distributed across 5,737 square feet, predominantly filled with powerful one-rack-unit (1U) servers. "Because we're HPC [high-performance computing] we mostly use 1Us from the major vendors — HP, Dell, Lenovo. The 1U is the most cost efficient, generally run, with dual processors at six cores each," Turff said.

    A wander round the datacentre mostly turned up Dell PowerEdge R610s and 1950s, plus a few Xeon-based ThinkServers. Storage-wise, there was an even spread of IBM System Storage DS5300s and FC5820s, along with some Panasas kit.
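
    A hedged sketch of what that adds up to in compute terms. The rack count and the dual six-core configuration are quoted above; the servers-per-rack fill level is an invented assumption, so the total is illustrative only:

    ```python
    # Rough core-count sketch for the hall. Racks and cores-per-server come
    # from the article; servers_per_rack is an assumed fill level.
    racks = 104
    servers_per_rack = 30          # assumption: 1U nodes, racks not fully populated
    cores_per_server = 2 * 6       # dual processors at six cores each (quoted)

    total_cores = racks * servers_per_rack * cores_per_server
    print(f"~{total_cores:,} cores")   # ~37,440 cores under these assumptions
    ```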

    For compute jobs, PGS had considered using specialised field-programmable gate arrays (FPGAs) for certain graphical processing tasks, but the rate of development in Intel systems was too high for FPGAs to match cost-effectively, Turff said.

    "[FPGAs] can do a fantastic job, but by the time you've done the development, Intel has added a bunch of cores and brought the performance cost down," he said.

    The megacentre was opened in November 2008 to replace a 15-year-old facility. "It's been remarkably trouble free, but when you design it from scratch, it's a lot easier to deal with than datacentres that are very old," Turff said. The facility was designed by Keysource, a datacentre specialist contractor.

    Photo credit: Jack Clark




  • PGS aisle

    The megacentre has a power usage effectiveness (PUE) rating of 1.148, which includes the power cost of the separate mini-datacentre used to store the tapes. The main processing hall itself has a PUE of 1.127.

    Power usage effectiveness is the ratio of total facility power to the power drawn by the IT equipment itself. The closer the rating gets to one, the greater the proportion of a facility's power that goes to the IT equipment rather than to supporting infrastructure such as cooling.

    The PUE of 1.127 was achieved by separating, cooling and recirculating air within the datacentre, using a combination of adiabatic cooling, outside air and filtering to cool the hall without spending much on power. The facility can be free-cooled for all but around 100 hours per year, as the sketch below illustrates.
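
    A minimal sketch of the PUE and free-cooling arithmetic. The 1,000kW IT load is a hypothetical figure used only to turn the quoted ratios into concrete overhead numbers; the article gives the ratios, not the absolute loads:

    ```python
    # PUE = total facility power / IT equipment power.
    # The IT load below is a hypothetical 1,000 kW; only the ratios are quoted.
    it_load_kw = 1000.0

    for label, pue in [("whole site", 1.148), ("processing hall", 1.127)]:
        overhead_kw = it_load_kw * pue - it_load_kw
        print(f"{label}: PUE {pue} means {overhead_kw:.0f} kW of overhead "
              f"per {it_load_kw:.0f} kW of IT load")

    # Free cooling covers all but ~100 of the 8,760 hours in a year.
    print(f"free-cooled for {100 * (1 - 100 / 8760):.1f}% of the year")
    ```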

    The rows of racks face onto one another in pairs, with the exhaust vents opening into a central corridor that is sealed off from the rest of the datacentre. Inside this hot aisle, the warm air rises into a ceiling void and passes through to the cooling systems.

    Photo credit: Jack Clark




Topics: Datacentre Tour, Networking

Jack Clark

About Jack Clark

Currently a reporter for ZDNet UK, I previously worked as a technology researcher and reporter for a London-based news agency.

