IBM System x and PureSystem X6 families: It's all about balance

Summary: Different workloads exercise different system components in different ways. The wrong mix of capabilities might make a specific system configuration perform badly even though it might have very powerful processors or extremely large memory capacity.

TOPICS: Data Centers

Recently IBM briefed analysts on its upcoming System x and PureSystem X6 product families, designed to address the changing needs of enterprises.

The key takeaway from this briefing, for me, was balance. As workloads and system technology change, different compromises have to be made in the design of systems to balance processing power; memory speed and capacity; network bandwidth and media; and storage type, channel, and performance.

Different workloads use systems in different ways

What's really important to remember is that different workloads exercise different system components in different ways. The wrong mix of capabilities might make a specific system configuration perform badly even though it might have very powerful processors or extremely large memory capacity.

The secret sauce: the system designer must apply an educated sense of what each type of workload does and which system resources will be used up first. The rest of the system only needs to be good enough to support that component up to its maximum.

Adding more of any other resource simply adds cost to the system, and the customer will see little or no additional benefit. An analogy would be an automobile manufacturer putting a 15,000 HP jet engine in a vehicle whose owner still can't go any faster than the legal speed limit. Most, if not all, of that horsepower is going to be wasted.
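The bottleneck reasoning above can be sketched in a few lines. This is a hypothetical illustration, not anything IBM presented: the workload profile, resource names and numbers are all invented. Throughput is capped by whichever resource a workload exhausts first, so quadrupling an already-ample resource buys nothing.

```python
# Illustrative sketch only: resource names, demands and capacities are invented.

def throughput(capacity, demand_per_job):
    """Jobs/sec the system can sustain: limited by the scarcest resource."""
    return min(capacity[r] / demand_per_job[r] for r in demand_per_job)

# An I/O-heavy workload profile (per-job resource demand).
demand = {"cpu": 0.5, "memory_gb": 2.0, "io_mbps": 50.0}

balanced  = {"cpu": 16, "memory_gb": 64,  "io_mbps": 1600}
overbuilt = {"cpu": 64, "memory_gb": 256, "io_mbps": 1600}  # 4x CPU, same I/O

print(throughput(balanced, demand))   # 32.0 jobs/sec
print(throughput(overbuilt, demand))  # still 32.0 -- the extra CPU is wasted
```

In this toy model the "overbuilt" box costs more yet delivers identical throughput, because I/O, not CPU, is the binding resource — the jet-engine-in-a-sedan situation.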

How do different types of workloads use system resources?

Computationally intense jobs, such as technical computing or modeling, tend to extensively exercise processors, cache memory and system memory. So, system configurations should include the most powerful processors, a very large system cache and enough high-speed memory for the task at hand.

Database-intense workloads, on the other hand, tend to stress the system cache, memory, storage and network subsystems more than the processors, even though processing power is still important. These systems benefit from fast I/O channels, flash cache and very high-speed disks. Another approach for these workloads is to simply replace all disks with more expensive, but much faster, DRAM or flash storage. These workloads tend not to use as much processing power as a technical computing or modeling workload would.

Virtualized systems -- that is, systems that support many different workloads -- tend to use system components differently than when those same workloads are hosted on physical systems. Typically the differences are that the memory and storage subsystems are stressed far more heavily than if the system is supporting a smaller number of host-based tasks.

Host systems supporting a large number of workloads encapsulated in virtual machines or operating system partitions (operating system virtualization and partitioning) tend to stress all of the internal components more than if the same physical host were executing any one of those workloads exclusively.

Cloud computing environments often stress the network, memory and processing subsystems more because much of the user interaction and storage access comes over a lower-performance wide area network rather than a high-performance local network.

Balancing these different requirements is what system development and system architectures are all about. IBM, having many decades of experience, often offers systems that are very effectively balanced.

Although all of the details of these systems will be announced later, the briefing disclosed that System x and PureSystem X6 systems will use the newest, highest performance processors, large caches, fast memory, flash cache to accelerate storage and an overall configuration designed to offer high levels of manageability, reliability and availability.


About

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.


Talkback

  • Different workloads use systems in different ways

    Exactly! We will see if they've gotten it right... clearly some past systems were not properly balanced AND more importantly, were improperly marketed up and down the IBM chain.

    In the past, their biggest, most expensive systems lagged in technology, understandably, but for sales purposes those were aggressively pushed where choosing the less expensive box with better technology would have resulted in a better-performing scale-out implementation. In general this is the better approach, except for a few specific use cases where the larger box is appropriate.

    Not that other vendors didn't make the same mistakes.
    greywolf7