IBM announces the Z10: Is the mainframe still relevant?

IBM just announced the newest member of its mainframe family, the Z10. While the performance claims IBM makes are quite impressive, the key question that comes to mind is "are mainframes still relevant in the world of virtualized resources?"
Written by Dan Kusnetzky, Contributor

IBM just announced the newest member of its mainframe family, the Z10. While the performance claims IBM makes are quite impressive, the key question that comes to mind is "are mainframes still relevant in the world of virtualized resources?" In short, the answer is a resounding yes.

Here's how IBM describes its newest baby.

IBM (NYSE: IBM) today announced the System z10 mainframe to help clients create a new enterprise data center. The System z10 is designed from the ground up to help dramatically increase data center efficiency by significantly improving performance and reducing power, cooling costs, and floor space requirements. It offers unmatched levels of security and automates the management and tracking of IT resources to respond to ever-changing business conditions.

IBM's next-generation, 64-processor mainframe, which uses Quad-Core technology, is built from the start to be shared, offering greater performance over virtualized x86 servers to support hundreds to hundreds of millions of users.

The z10 also supports a broad range of workloads. In addition to Linux, XML, Java, WebSphere and increased workloads from Service Oriented Architecture implementations, IBM is working with Sun Microsystems and Sine Nomine Associates to pilot the Open Solaris operating system on System z, demonstrating the openness and flexibility of the mainframe.

From a performance standpoint, the new z10 is designed to be up to 50% faster and offers up to 100% performance improvement for CPU intensive jobs compared to its predecessor, the z9, with up to 70% more capacity. The z10 also is the equivalent of nearly 1,500 x86 servers, with up to an 85% smaller footprint, and up to 85% lower energy costs. The new z10 can consolidate x86 software licenses at up to a 30-to-1 ratio.
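
To put those consolidation claims in perspective, here is a quick back-of-the-envelope sketch in Python. The 1,500-server, 85% and 30-to-1 figures come from the announcement above; every per-server number (power draw, floor space, one license per box) is an assumption of mine for illustration, not an IBM figure.

X86_SERVERS         = 1500     # servers the announcement says one z10 can replace
WATTS_PER_X86       = 400      # assumed average draw per x86 server (my guess)
SQFT_PER_X86        = 1.2      # assumed allocated floor space per server (my guess)
CONSOLIDATION_RATIO = 30       # the quoted 30-to-1 software license consolidation

x86_power_kw   = X86_SERVERS * WATTS_PER_X86 / 1000.0
x86_floor_sqft = X86_SERVERS * SQFT_PER_X86
x86_licenses   = X86_SERVERS

# Apply the quoted reductions: up to 85% less energy and floor space.
z10_power_kw   = x86_power_kw * (1 - 0.85)
z10_floor_sqft = x86_floor_sqft * (1 - 0.85)
z10_licenses   = x86_licenses / CONSOLIDATION_RATIO

print(f"Power:    {x86_power_kw:8.1f} kW   -> {z10_power_kw:8.1f} kW")
print(f"Floor:    {x86_floor_sqft:8.0f} sqft -> {z10_floor_sqft:8.0f} sqft")
print(f"Licenses: {x86_licenses:8.0f}      -> {z10_licenses:8.0f}")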

Is virtualization technology going to stunt the growth of IBM's baby?

IBM was either the originator of, or a major participant in the development of, almost every virtualization technology that has become so exciting today. This includes virtual access software, virtual application environments, virtual processing, virtual storage, virtual networks and both management and security of virtualized resources. The company has been busy enhancing this technology for well over 30 years.

Some of this technology has been emulated by a whole host (pun intended) of other companies — on their own proprietary processor technology or on industry standard systems. Much of the current interest in this technology can be attributed to the appearance of virtualization technology on industry standard systems.

What some don't fully realize is that after over a decade of improvements, most of the virtualization technology for industry standard systems still doesn't reach the same level of performance or capability as the same type of technology on a mainframe.

When is a high-cost system really the low-cost solution?

Organizations often decide to move towards solutions based upon industry standard systems because they believe that the hardware acquisition costs will be lower. That, of course, is only one part of the equation that defines an organization's total cost structure. When I was conducting studies for IDC, hardware and software combined typically represented less than 25% of solution costs over three years of use. If the study stretched to five years, that factor dropped to something on the order of 20%.

Other costs, such as the staff-related costs of installation, maintenance, administration, operations, training, help desk and the like, typically represented somewhere between 50% and 70% of the total costs in those same studies. This is where a centralized solution, or anything that offers centralized management and administration, shines. This is also why the "expensive" mainframe configuration just might be the least costly way to address an organization's computing needs.
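
To make that arithmetic concrete, here is a toy version of the cost-structure math in Python: a one-time hardware and software purchase plus recurring staff and "other" costs. The dollar figures are illustrative assumptions of mine, not data from the IDC studies, chosen only so the shares land in the ranges described above.

def cost_shares(years):
    hw_sw = 300_000 + 120_000 + 40_000 * years   # hardware, licenses, maintenance
    staff = 450_000 * years                      # install, admin, operations, help desk
    other = 100_000 * years                      # facilities, training, downtime, etc.
    total = hw_sw + staff + other
    return total, hw_sw / total, staff / total

for years in (3, 5):
    total, hw_sw_share, staff_share = cost_shares(years)
    print(f"{years}-year TCO ${total:,}: hardware+software {hw_sw_share:.0%}, "
          f"staff-related {staff_share:.0%}")

However you tune the inputs, the shape of the answer is the same: the boxes are the small slice of the bill, and the people who install, run and support them are the big one.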

This, by the way, is why there is such a rush to consolidate and "mobilize" workloads based upon industry standard systems. Just about the only way to achieve decent cost structures is to consolidate applications so that they can be managed by a small group of people on a small number of systems.

What's easier: managing a couple of mainframe sysplex configurations, or managing several thousand industry standard systems spread throughout the datacenter? Why, the mainframe, of course. So the folks in the industry standard systems camp have been developing loads of new technology in the hopes of creating an industry standard mainframe. IBM, by the way, plays in this world too.

A Utopian view

Over time, virtualization technology is making it increasingly possible to execute just about any workload on a standard system and move that workload from system to system or replicate that workload on many systems to achieve the desired levels of performance, scalability and availability. Organizations are taking a clue from hosting companies and starting to think of all of their computing resources as a pool that can be assigned and reassigned dynamically to meet their service level objectives.
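
As a rough sketch of that "pool of resources" idea, the following Python fragment places workloads onto whichever host in the pool still has spare capacity. The host names, capacities and demands are invented for illustration; a real resource manager weighs far more than a single capacity number when it assigns and reassigns work.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float            # abstract capacity units
    used: float = 0.0
    workloads: list = field(default_factory=list)

    def fits(self, demand):
        return self.used + demand <= self.capacity

def place(workload, demand, pool):
    """First-fit placement: put the workload on the first host with room."""
    for host in pool:
        if host.fits(demand):
            host.used += demand
            host.workloads.append(workload)
            return host
    return None                # no capacity left anywhere in the pool

pool = [Host("hostA", 16), Host("hostB", 16)]
for name, demand in [("web", 6), ("db", 10), ("batch", 8)]:
    target = place(name, demand, pool)
    print(f"{name:6s} -> {target.name if target else 'unplaced'}")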

Who did this nearly 30 years ago? Why, IBM and the other mainframe suppliers, of course.

Wouldn't it be nice if there was a way to make everything "play nice" with everything else?

A CPU in the ointment

Since there has been no standard system architecture, most hardware suppliers developed their own processors and other system level components. Many of the key workloads organizations use are now hosted on a variety of hardware architectures. This, of course, gets in the way of the utopian dream of using all of an organization's systems as a pool of resources.

As virtual machine technology and other virtualization technologies have improved, and as the underlying processor technology has seen dramatic performance improvements, it has become increasingly possible for a virtual machine software product to emulate one processor architecture while running on another.
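
For a feel of what that means, here is a toy interpreter in Python: a host program executing the instruction set of a different, made-up "guest" processor. Real cross-architecture emulators lean on binary translation and plenty of other tricks to get acceptable speed; this sketch shows only the basic fetch-decode-execute idea.

def run_guest(program, registers=None):
    """Interpret a tiny made-up guest ISA: LOAD, ADD, HALT."""
    regs = registers or {"r0": 0, "r1": 0}
    pc = 0                                  # guest program counter
    while True:
        op, *args = program[pc]             # fetch and decode the guest instruction
        if op == "LOAD":                    # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":                   # ADD dest, src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            return regs
        pc += 1                             # advance to the next guest instruction

guest_program = [
    ("LOAD", "r0", 40),
    ("LOAD", "r1", 2),
    ("ADD", "r0", "r1"),
    ("HALT",),
]
print(run_guest(guest_program))             # {'r0': 42, 'r1': 2}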

The race is on

What would it be like if all of an organization's workloads, regardless of whether they run on Windows, Linux, UNIX, Mac OS, OS/390 or some other operating system, could run on the same hardware architecture at the same time, even if the UNIX workload expects to be running on a SPARC-based machine, Windows expects an industry standard machine and the OS/390 workload needs a mainframe? It would allow the unification of the datacenter, from a hardware perspective, that organizations have wanted for decades but couldn't quite realize.

The ingredients are all on the table: who's going to be first to bake the solution?

The industry has now gotten to a place at which all of the ingredients are on the table: the virtualization technology, the processor performance, the need for cost reductions and the need to reduce both space and power requirements. The race to bake the perfect system solution is on between IBM and the other mainframe suppliers on one side and the folks in the industry standard systems camp on the other. Both want to be the first to offer the best unified environment.

The announcement of the IBM z10 can clearly be seen as the company's first key move in this race. The company has been able to support several Linux distributions, AIX (IBM's UNIX) as well as all of the mainframe software for which it has been known for decades. This announcement tells us that Solaris is going to join everyone in the pool (c'mon in, the processing is fine). How long will it be before Windows appears?

What do you think that the industry standard systems camp is going to do to respond to this announcement?
