Is power-hungry supercomputing OK now?

Summary: We may be planning for a 1,000-fold increase in compute power in the next decade, but what about the extra power consumption, asks Andrew Jones.

The world's most powerful supercomputers can require many megawatts of electricity to operate. But what if the next 1,000-fold performance increase needs 100MW, asks Andrew Jones.

I was recently interviewed about deploying the world's largest supercomputers for The Exascale Report, a magazine focused on the evolution of supercomputing and the targeted 1,000-fold increase in compute power in the next 10 years.

Inevitably, the interview covered the huge costs involved, especially energy, and it got me thinking. What follows is not yet my opinion, but it might start an interesting discussion.

There is a range of estimates for the likely power consumption of the first exaflops supercomputers, which are expected at some point between 2018 and 2020. But probably the most widely accepted estimate is 120MW, as set out in the Darpa Exascale Study edited by Peter Kogge.

At this figure, the supercomputing community panics and says it is far too much: we must get it down to between 20MW and 60MW, depending on who you ask, and we worry even that is too much. But is it?

Supercomputers as scientific instruments
First an aside. In my opinion, the largest supercomputers at any time, including the first exaflops machines, should not be thought of as computers. They are strategic scientific instruments that happen to be built from computer technology. Their usage patterns and scientific impact are closer to those of major research facilities such as Cern, Iter, or Hubble.

Back to the question of power consumption. I looked at other major scientific facilities for comparison. So, some quick web searching shows that Cern idles at 35MW and peaks at 180MW when everything is running. It consumes about 1,000GWh per year, or the equivalent of about 120MW steady state.
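
As a quick sanity check on that steady-state figure, here is a minimal back-of-envelope sketch in Python (the 1,000GWh annual consumption is the figure quoted above; the 8,760-hour year is a standard assumption):

    # Convert Cern's rough annual energy use into an average continuous power draw
    annual_energy_gwh = 1_000               # approximate annual consumption, from above
    hours_per_year = 365 * 24               # 8,760 hours in a non-leap year
    average_power_mw = annual_energy_gwh * 1_000 / hours_per_year   # GWh -> MWh, then divide by hours
    print(f"{average_power_mw:.0f} MW")     # roughly 114MW, in line with the 120MW quoted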

In terms of construction costs, one estimate of the cost to design and deploy the first exaflops supercomputer facility is between $1bn (£650m) and $2bn, with subsequent procurements of early exaflops systems of the order of a few $100m each. Well, for comparison, the LHC at Cern is a $9bn project, Iter has a $5bn build budget and a further $5bn operational budget over 35 years, and so on for Hubble and the rest.

So our power requirements are not that outrageous compared with other major scientific facilities. Neither are our overall costs. My question is: are we making such a poor case for supercomputers that we get scared by 20MW to 60MW and a few $100m for the biggest?

Major impact on disparate sciences
One of supercomputing's greatest strengths is its ability to have a major impact on research across a huge range of disparate sciences from climate to medicine to aerodynamics to cosmology. Is this also one of its weaknesses — that any statement of its value is always a list of many sciences, rather than one simple message as it can be for a major facility owned by a single discipline?

Another clue may lie in the extreme pace at which supercomputing technology evolves — and thus we need a new supercomputer every...
