The cloud: High performance computing's best hope?

Summary: Most organizations' configurations, says Cycle Computing's Jason Stowe, are "too small when you need it most, too large every other time."

At the recent ISC Cloud '13 conference, Jason Stowe, CEO of Cycle Computing, presented an interesting assessment of many companies' growing need for on-demand high performance computing.

Cycle Computing's main points

Cycle Computing believes that easy access to high performance computing — that is, the ability to tackle the largest and most complicated computing tasks by harnessing the power of hundreds or even thousands of computers — will expand the capabilities of many companies that previously could not use high performance computing at all.

Organizations that wish to use this approach need a large budget for hardware, software, power, networking and storage, as well as high levels of expertise on hand — unless they turn to offerings from cloud service providers.

Organizations of all types and in all markets are facing reduced budgets, yet still must meet accelerated timelines to gather data, analyze it, and turn it into useful, actionable information. This cycle must be repeated faster and faster to meet organizational goals.

The key challenge is that this cycle often requires a huge number of systems, fast networks, and the expertise to set up an HPC configuration. Since this installation may only be needed a small portion of the time, the organization's HPC resources would sit idle and unproductive a good deal of the time. In Stowe's words, today's configurations are "too small when you need it most, too large every other time." Stowe pointed out that the industry is seeing this challenge when dealing with engineering, science, and even business analytics.
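Stowe's "too small / too large" point is, at bottom, a utilization argument. A back-of-the-envelope model makes it concrete: an owned cluster costs roughly the same whether it runs flat out or sits idle, while on-demand capacity is billed only for hours used. The sketch below uses entirely illustrative figures (node prices, amortization period, hourly rates), not any vendor's actual pricing.

```python
# Illustrative cost model for the owned-vs-on-demand HPC trade-off.
# All numbers below are assumptions for the sake of the example,
# not real hardware or cloud prices.

def owned_cost_per_year(nodes, capex_per_node, amortize_years, opex_per_node):
    """Annual cost of an in-house cluster, paid regardless of utilization."""
    return nodes * (capex_per_node / amortize_years + opex_per_node)

def cloud_cost_per_year(nodes, hours_used, price_per_node_hour):
    """Annual cost of renting the same capacity only when it is needed."""
    return nodes * hours_used * price_per_node_hour

if __name__ == "__main__":
    nodes = 500  # size of the burst the workload actually needs
    owned = owned_cost_per_year(nodes, capex_per_node=8000,
                                amortize_years=4, opex_per_node=1500)
    # A bursty workload: the cluster is busy only ~10% of the year.
    cloud = cloud_cost_per_year(nodes, hours_used=0.10 * 8760,
                                price_per_node_hour=1.50)
    print(f"owned: ${owned:,.0f}/yr at any utilization")
    print(f"cloud: ${cloud:,.0f}/yr at 10% utilization")
```

Under these made-up numbers the owned cluster costs about $1.75M a year whether or not it is used, while renting the same 500 nodes for 10% of the year costs about $657K; push utilization toward 100% and the comparison flips, which is exactly why the answer is workload-dependent rather than "always cloud."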

In the end, Stowe was happy to point out that his company, Cycle Computing, offers the tools and services to address this problem.

Snapshot Analysis

After reviewing his presentation, I agree: organizations of all sizes and in all markets have computing needs that are intense at some times and nearly non-existent at others. Purchasing their own equipment, licensing their own software and setting up their own data centers may not be the correct choice. A better choice might be to use cloud services to perform the work.

On the other hand, the industry has seen organizations re-purpose their workstations, departmental servers and even business unit servers during off hours to achieve their goals without having to over-provision the datacenter, over-buy software licenses, and let expensive resources sit idle when not needed.

Which choice is the best isn't as simple as pressing the "cloud computing" button all of the time. A thoughtful analysis of the organization's needs, its available resources, and staff expertise might suggest a different approach.

What is your organization doing to address its need for high performance computing, business analytics and research?


Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.


  • build our own datacenter

    We have chosen to build our own datacenter and to bet on private cloud.
    Seriously our data are too sensitive to choose public cloud computing.
  • So let me get this straight...

    Jason, the founder of a company who rents server farms for processing large amounts of data, says that most companies don't have server farms for processing large amounts of data, and if they do, they're not always processing large amounts of data. Therefore, renting Jason's server farms for processing large amounts of data is the solution for companies who only occasionally need server farms for processing large amounts of data.

    Despite this being IBM's business model for the past decade, and the way computing was done from the '50s to the late '80s, this is somehow news?

    • ... like never before.

      Hi Joey, good comment on the story. It reminds me of a song by the Barenaked Ladies: "It's All Been Done."

      If you take a look back at computing history, there are many examples of great ideas that take time to be fully realized. In high performance computing (HPC), for example, highly scalable computers gained popularity in the early 1980s, when massively parallel processing (MPP) machines aimed to take advantage of low-cost processors connected together to run large-scale scientific applications. But the vision of parallel processing didn't really take off until the very late 1990s and early 2000s, with the wide adoption of Linux and x86 (Intel-based) computer components, which significantly reduced the cost while delivering the power necessary for supercomputing.

      This is also the case today with cloud computing, which is becoming very accessible because of recent advancements in software, lower costs, and application availability. So yes, I will agree with your statement that it's not an entirely new idea, but its time has come. The type of engineering and science being performed this way is on a scale like never before.