It's official: Supercomputing is now ho-hum (thanks, cloud)

Supercomputing as a Service. Now available to all in a 'post-hardware' world.
Written by Joe McKendrick, Contributing Writer

Over the years, I have had the opportunity to speak with people at some of the most impressive scientific and technical operations on the planet -- Lawrence Livermore National Laboratory, IBM and so forth. They had platforms -- lashing together thousands upon thousands of compute nodes -- capable of performing astonishing computations for demanding applications at blazing speeds.

Cloud has made that just routine, day-to-day computing. Thanks to cloud, mainstream business users can tap into such immense power, even if only for a singular, temporary purpose, via a pay-as-you-go model. And no one has exemplified this breakthrough more than Cycle Computing, whose customers access tens of thousands of processing nodes for tasks such as identifying molecule targets for drug testing or conducting risk modeling for financial services transactions.
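To see why these workloads suit the cloud's rental model, consider their shape: each molecule or risk scenario can be scored independently of every other, so throughput scales almost linearly with the number of nodes you rent for the afternoon. The toy Python sketch below is purely illustrative -- the scoring function and candidate names are invented, not Cycle's actual pipeline -- but a cloud scheduler essentially fans this same map out across thousands of cores instead of the handful on one machine.

```python
# Toy sketch of an embarrassingly parallel screening job (illustrative only).
# score_candidate() is a hypothetical stand-in for real chemistry; a cloud
# HPC scheduler would fan the same map out across thousands of rented nodes.
from multiprocessing import Pool
import hashlib

def score_candidate(molecule_id):
    """Score one candidate; each call is independent of every other."""
    digest = hashlib.sha256(molecule_id.encode()).hexdigest()
    return molecule_id, int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-score in [0, 1]

if __name__ == "__main__":
    candidates = ["mol-%06d" % i for i in range(100_000)]
    with Pool() as pool:  # one worker per local core; the cloud version rents more
        results = pool.map(score_candidate, candidates, chunksize=1_000)
    top_hits = sorted(results, key=lambda r: r[1], reverse=True)[:5]
    print("Top candidates:", top_hits)
```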

See: Microsoft acquires cloud-computing orchestration vendor Cycle Computing

Now, think back more than three decades: computing itself was the province of the few and the connected. Then the PC came on the scene, and non-tech business users suddenly had computers they could install in their homes and offices. Microsoft was one of the companies that joined in and eventually led this breakthrough.

So it's interesting to see Microsoft, the company that helped bring personal computing to the masses, acquire Cycle, the company that helped bring supercomputing to the masses. Perhaps it's only natural and inevitable that the two came together. If you follow Microsoft's playbook, which has grown it into the $85-billion-a-year company it is today, you will see that it only enters markets that are ripening into mass markets. It tends to stay away from early-stage, high-margin stuff.

So supercomputing is now just part of the ubiquitous range of services offered by a rapidly consolidating handful of power players -- Amazon Web Services, Google, Microsoft and IBM. It's supercomputing for all, thanks to cloud. "With the availability of cloud-based HPC, the ability to count nodes, petabytes, or bandwidth is less relevant because anyone can get access to the size that they need," Cycle's Rick Friedman pointed out in a recent post. "This move to a 'post-hardware' view of HPC enables people to think beyond the limitations of what they are running on and focus on [the] problem they are trying to solve."

I caught up with Josh Simon, VP of cloud services at Atlantic.Net, who offered some interesting observations on the implications of what's happening here. "This acquisition is consistent with overall consolidation taking place in the industry, especially management of larger and more complex workloads," he says. "We are seeing these types of workloads becoming more of a business requirement than a rarity in today's shifting computing landscape."

From Microsoft's perspective, the acquisition makes sense, as it helps "up their IQ in an area that isn't their current strong suit. This shows that Microsoft is serious about taking on AWS, Google, Atlantic.Net and others that have been operating in this space for a very long time."

Simon sees the move as increasing -- not decreasing -- competition. "Competition drives innovation and traditionally lowers prices which are all a big win for consumers that are not only continuing to move to cloud computing, but also increasing their workloads in dramatic fashion."

Cycle's Friedman also pointed to this growing democratization of supercomputing. "Cloud is bringing [high performance computing, HPC] into a broader world. We see a broader use of the techniques and technologies of HPC to help people use computation to predict rather than simply report. Historically, most analytic computation has been focused on validating and reporting what we already know: capturing transactions, reporting on activities, validating designs, checking our math. Classic examples include all of the accounting type workloads, human resource systems, inventory management, etc. With the availability of HPC-like (large compute, network, and storage capacity) environments easily accessible by anyone, more and more groups are using data and simulation to predict future events, outcomes, or reactions."
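The difference Friedman describes -- reporting what already happened versus predicting what might -- is easy to see in miniature. A reporting job totals known transactions; a predictive job simulates thousands of possible futures and studies the spread. The short Monte Carlo sketch below uses made-up figures purely to illustrate the second kind of workload, the kind that once required a dedicated cluster and now runs on rented capacity.

```python
# Reporting vs. predicting, in miniature. All figures are invented.
import random
import statistics

# Reporting: validate and total what we already know.
transactions = [1200.0, -340.0, 980.0, 450.0]
print("Booked revenue: %.2f" % sum(transactions))

# Predicting: simulate many possible futures and summarize the distribution.
def simulate_year(start=10_000.0, monthly_growth=0.02, volatility=0.08):
    value = start
    for _ in range(12):
        value *= 1 + random.gauss(monthly_growth, volatility)
    return value

outcomes = sorted(simulate_year() for _ in range(100_000))
print("Median outcome: %.2f" % statistics.median(outcomes))
print("5th-percentile downside: %.2f" % outcomes[int(0.05 * len(outcomes))])
```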

Such supercomputing power also will play a role in design, Friedman says, "exploring multiple options to develop new approaches and not just validate completed designs." With supercomputing for all, "everyone can have their own cluster for the size they need when they need it. These are the people that are truly driving HPC forward in the coming years. These are the people that don't have computer science, electrical engineering, or other classic technical degrees; they are people from the design or marketing or sales or production side of the world that are using these technologies to approach their work a new way. And these people don't care about the hardware."
