
High performance computing (HPC): A look at what's next with TACC's Dan Stanzione

Using supercomputers for simulations has moved well beyond initial core industries into mainstream enterprises. Here's a look at sustainability, consumption models and industries to watch.
Written by Larry Dignan, Contributor

Hikari aims to be TACC's greenest project and runs primarily on DC power.

High performance computing (HPC) is increasingly going mainstream as the industries using supercomputers to crunch data multiply at a rapid clip.

We caught up with Dan Stanzione, executive director of the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, to talk about the state of HPC, sustainability, consumption models and use cases.

Here are the key points:

Sustainability and DC data centers: One of TACC's most interesting projects is Hikari, a project funded by the Japanese government using NTT facilities and HPE's Apollo hardware. The general idea is to eliminate the AC to DC power conversions in the data center to improve sustainability.

"A lot of renewables like solar and batteries generate DC natively," said Stanzione. "And we're wasting money on the conversions. Solar power to computers go through at least four AC to DC conversions."

Other efficiency gains are expected to come from racks, renewable power and HPE's Apollo 8000 systems, which run on 380-volt DC and are water cooled. "While we are working on sustainability in HPC, the goal is to make the demonstration apply to the enterprise data center," said Stanzione. The target is to save about 15 percent on power consumption.

At the scale of cloud data centers, that 15 percent would equate to a lot of money. Stanzione added that it's too early to nail down specific returns on Hikari.
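To make the arithmetic concrete, here's a rough back-of-the-envelope sketch of how conversion losses compound. The per-stage efficiency figures are illustrative assumptions, not numbers from TACC or NTT; they're chosen only to show how eliminating conversion stages can land in the same ballpark as the 15 percent goal.

# Back-of-the-envelope sketch of compounded power-conversion losses.
# The per-stage efficiencies below are illustrative assumptions, not
# figures from TACC or NTT.

def chain_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# A conventional path: four conversion stages at roughly 95% efficiency each.
ac_path = chain_efficiency([0.95, 0.95, 0.95, 0.95])

# A native-DC path: assume only two step-down stages remain.
dc_path = chain_efficiency([0.97, 0.97])

print(f"Four-stage path delivers {ac_path:.1%} of generated power")
print(f"Two-stage DC path delivers {dc_path:.1%} of generated power")
print(f"Relative saving: {(dc_path - ac_path) / ac_path:.1%}")

With those assumed numbers, the four-stage path delivers about 81 percent of the generated power and the two-stage DC path about 94 percent, a relative saving of roughly 15 percent.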

How long before DC makes it to the enterprise? First, more server vendors would have to offer DC systems and settle on a standard. Stanzione expects standards groups to iron out the DC power issue for the market, and efforts like Tesla's push to bring battery power into houses will also help take DC mainstream.

"When new data centers are built out it would make sense to start with DC," he said. "With legacy data centers there would be a cost to convert. It's much easier to start with DC to begin with."

Time to solution as the main HPC driver: I noted that multiple vendors are talking about HPC in the enterprise and illustrating what the systems can do rather than touting raw horsepower. Stanzione noted that time to solution has always been an important metric and the fundamental driver behind HPC. What's new is that the concept of time to solution has gone mainstream, and that's an important development.

"What has shifted is that it's not easy to measure peak performance anymore," said Stanzione. "With hardware and high end chips it's getting harder to measure peak performance. What you're really trying to get to is the capability. If I buy this machine how much faster can I solve a problem?"

In addition, the more businesses use HPC, the more likely it is that time to solution will matter more than horsepower.
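In practice, measuring time to solution just means timing a workload that stands in for the problem you actually need to solve, rather than quoting peak FLOPS. A minimal sketch, with a dense linear solve as a placeholder workload (the workload and sizes are assumptions, not anything TACC benchmarks):

# Minimal time-to-solution sketch: time a representative workload instead
# of quoting peak FLOPS. The workload here (a dense matrix solve) is just
# a placeholder assumption.

import time
import numpy as np

def time_to_solution(n=2000, repeats=3):
    """Return the best wall-clock time (seconds) to solve Ax = b."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        np.linalg.solve(A, b)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    print(f"Best time to solution: {time_to_solution():.3f} s")

Comparing that single number across two machines answers Stanzione's question directly: if I buy this machine, how much faster can I solve my problem?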

How is the HPC market changing? Stanzione said that the biggest change is which industries are using HPC. Historically, TACC worked with oil and gas, aerospace and automotive companies that had to run big simulations.

"More recently, there's a notion that simulation isn't just for big physical analytics," he said. "Life sciences, health and pharma are all mining patient data and genomic data. There's also corporate analytics departments using HPC for customer data."

"We used to use HPC for physical sciences. Today we're talking about huge data analysis problems with Facebook and the banks. There are a lot of things to model," said Stanzione.

How will HPC be consumed? Stanzione said there will be a few ways to use HPC. For problems that simply require more throughput and compute capability, the cloud will be a good option. "Big collections of small problems fit in with the cloud," said Stanzione. "We'll see a lot of that."

But for bigger projects, you'll see specialized providers like TACC and the national labs focused on a particular problem.

And there will be a lot of companies and entities where HPC systems will be built in-house. "There are some significant concerns with proprietary data sets that companies don't want to get out the door," said Stanzione. "You can talk about security, but multibillion-dollar decisions will keep HPC adoption in-house."

The mainstreaming of HPC: Stanzione said supercomputing is mainstream if you just focus on raw computing power. "HPC is the tip of the spear and the things we learn over time get translated into broader industry," he said. "Every new product is referred to as a supercomputer in a box. The performance trickles down."

Mainstreaming the techniques behind HPC is a bit trickier. Parallel processing is critical to HPC and has been for decades. Now that approach is being used by programmers everywhere.
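The basic pattern is now available to any programmer: split independent work across cores, then combine the results. A small sketch of that idea, using a toy workload (summing squares over chunks of a range; the workload and chunk sizes are arbitrary assumptions for illustration):

# A small sketch of the parallel-processing pattern HPC popularized:
# split independent work across cores, then combine the results.
# The workload is a toy assumption chosen for illustration.

from multiprocessing import Pool

def sum_of_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool() as pool:  # one worker per available core by default
        partials = pool.map(sum_of_squares, chunks)
    print("Total:", sum(partials))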

"Things happen differently at the cutting edge," said Stanzione. "For instance, how are we going to use quantum computing and other techniques to improve performance? That may not be mainstream for a while."
