IBM Symphony orchestrating technical and big data applications

Summary: Developing low-latency orchestration tools that can accelerate technical and big data applications requires vast knowledge of cluster and grid computing dynamics. IBM just released the results of several benchmarks that demonstrate mastery of that arcane knowledge.

Since suppliers aren't in the business of giving complex, expensive computing solutions away, they instead try to demonstrate what a roughly comparable workload can do on a specific configuration. The benchmarks IBM cited are designed largely to show how certain types of cluster- or grid-based computing solutions will perform.

A key question is whether a customer will see the same or similar performance running their own applications. The answer, of course, is "it depends." Very similar workloads running on very similar system configurations, set up by people with expertise comparable to IBM's, are likely to see very similar performance. Workloads that are quite different, running on quite different configurations set up by people with quite different levels of expertise, are likely to perform differently.

What attracted my attention were the enormous performance improvements offered by inserting IBM Platform Symphony or LSF into an environment when both the software being tested and the system configurations were identical. While I wasn't totally surprised, having followed Platform Computing for nearly two decades, the results were impressive.

The point IBM is trying to make, that an intelligent orchestration tool designed to manage the efforts of thousands of systems can deliver better performance, greater efficiency, and lower costs, appears to be well supported by the benchmark results. I have to wonder, however, whether similar results could be achieved with other orchestration software, such as the well-known Beowulf project. Since that type of configuration wasn't tested, we don't know the answer to that question.
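
To illustrate why a low-latency scheduler matters at all, here is a minimal, generic Python sketch. It is not Symphony's or LSF's actual behavior or API; the worker count, task runtimes, and dispatch overheads are illustrative assumptions. It compares carving work into fixed slices up front against handing tasks out dynamically, and shows how a slow dispatcher gives back much of the gain.

import heapq
import random

random.seed(7)
# Simulated task runtimes in seconds: a mix of short and long tasks (assumed workload).
task_runtimes = [random.expovariate(1.0) for _ in range(200)]
workers = 16

def static_makespan(tasks, n_workers):
    # Carve the task list into fixed slices up front; the slowest slice sets the finish time.
    slices = [tasks[i::n_workers] for i in range(n_workers)]
    return max(sum(s) for s in slices)

def dynamic_makespan(tasks, n_workers, dispatch_overhead):
    # A central scheduler hands each task to whichever worker frees up first.
    # dispatch_overhead models the scheduler's per-task latency.
    free_at = [0.0] * n_workers
    heapq.heapify(free_at)
    for t in tasks:
        earliest = heapq.heappop(free_at)
        heapq.heappush(free_at, earliest + dispatch_overhead + t)
    return max(free_at)

print("static partitioning:      %6.1f s" % static_makespan(task_runtimes, workers))
print("dynamic, 1 ms dispatch:   %6.1f s" % dynamic_makespan(task_runtimes, workers, 0.001))
print("dynamic, 250 ms dispatch: %6.1f s" % dynamic_makespan(task_runtimes, workers, 0.250))

In this toy setup, dynamic dispatch finishes sooner than static partitioning because no worker sits idle while another grinds through a long slice, and the advantage shrinks as the per-task dispatch latency grows. That, in a very small nutshell, is the argument for low-latency orchestration at cluster scale.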

If your organization is involved in technical computing, high-performance computing or Big Data, it would be wise to look into what IBM did and learn how to improve both the performance and efficiency of your own operation. You are also likely to discover that you can accomplish the same work with a much smaller system configuration when a low-latency orchestration tool like Symphony is optimizing resource usage.

About

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.
