IBM announces the Z10: Is the mainframe still relevant?

Summary: IBM just announced the newest member of its mainframe family, the Z10. While the performance claims IBM makes are quite impressive, the key question that comes to mind is "are mainframes still relevant in a world of virtualized resources?"


IBM just announced the newest member of its mainframe family, the Z10. While the performance claims IBM makes are quite impressive, the key question that comes to mind is "are mainframes still relevant in a world of virtualized resources?" In short, the answer is a resounding yes.

Here's how IBM describes its newest baby.

IBM (NYSE: IBM) today announced the System z10 mainframe to help clients create a new enterprise data center. The System z10 is designed from the ground up to help dramatically increase data center efficiency by significantly improving performance and reducing power, cooling costs, and floor space requirements. It offers unmatched levels of security and automates the management and tracking of IT resources to respond to ever-changing business conditions.

IBM's next-generation, 64-processor mainframe, which uses Quad-Core technology, is built from the start to be shared, offering greater performance over virtualized x86 servers to support hundreds to hundreds of millions of users.

The z10 also supports a broad range of workloads. In addition to Linux, XML, Java, WebSphere and increased workloads from Service Oriented Architecture implementations, IBM is working with Sun Microsystems and Sine Nomine Associates to pilot the Open Solaris operating system on System z, demonstrating the openness and flexibility of the mainframe.

From a performance standpoint, the new z10 is designed to be up to 50% faster and offers up to 100% performance improvement for CPU intensive jobs compared to its predecessor, the z9, with up to 70% more capacity. The z10 also is the equivalent of nearly 1,500 x86 servers, with up to an 85% smaller footprint, and up to 85% lower energy costs. The new z10 can consolidate x86 software licenses at up to a 30-to-1 ratio.

Is virtualization technology going to stunt the growth of IBM's baby?

IBM was one of the originators of, or a major participant in, the development of almost every virtualization technology that generates so much excitement today. This includes virtual access software, virtual application environments, virtual processing, virtual storage, virtual networks, and both management and security of virtualized resources. It has been busy enhancing this technology for well over 30 years.

Some of this technology has been emulated by a whole host (pun intended) of other companies — on their own proprietary processor technology or on industry standard systems. Much of the current interest in this technology can be attributed to the appearance of virtualization technology on industry standard systems.

What some don't fully realize is that after over a decade of improvements, most of the virtualization technology for industry standard systems still doesn't reach the level of performance or capability of the same type of technology on a mainframe.

When is a high-cost system really the low-cost solution?

Often organizations decide to move towards solutions based upon industry standard systems because they believe the hardware acquisition costs will be lower. That, of course, is only one part of the equation that defines an organization's total cost structure. When I was conducting studies for IDC, it was often the case that hardware and software combined typically represented less than 25% of solution costs over three years of use. If the study stretched to five years, that factor dropped to something on the order of 20%.

Other costs, such as the staff-related costs of installation, maintenance, administration, operations, training, help-desk and the like, typically represented somewhere between 50% and 70% of the total costs in those same studies. This is where a centralized solution, or anything that offers centralized management and administration, shines. This is also why the "expensive" mainframe configuration just might be the least costly way to address an organization's computing needs.
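The arithmetic behind that conclusion is easy to sketch. The figures below are hypothetical placeholders chosen only to match the rough ratios cited above (hardware and software under 25% of three-year costs, staff-related costs 50% to 70%), not real pricing from any vendor:

```python
# Toy three-year total-cost-of-ownership comparison. All dollar figures
# are invented for illustration; only the ratios mirror the text above.

def total_cost(hw_sw, staff, other):
    """Sum the three cost buckets discussed in the article."""
    return hw_sw + staff + other

# Distributed scenario: cheap hardware, but staff costs dominate.
distributed = total_cost(hw_sw=250_000, staff=650_000, other=100_000)

# Centralized scenario: pricier hardware, far smaller staff share.
centralized = total_cost(hw_sw=500_000, staff=200_000, other=100_000)

print(f"distributed: ${distributed:,}  (hardware share {250_000/distributed:.0%})")
print(f"centralized: ${centralized:,}  (hardware share {500_000/centralized:.0%})")
```

Even with hardware costing twice as much, the centralized scenario comes out cheaper overall once the staff bucket shrinks, which is the article's point in miniature.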

This, by the way, is why there is such a rush to consolidate and "mobilize" workloads based upon industry standard systems. Just about the only way to achieve decent cost structures is to consolidate applications so that they can be managed by a small group of people on a small number of systems.

Which is easier: managing a couple of mainframe sysplex configurations, or managing several thousand industry standard systems spread throughout the datacenter? Why, the mainframe, of course. So the folks in the industry standard systems camp have been developing loads of new technology in the hopes of building an industry standard mainframe. IBM, by the way, plays in this world too.

A Utopian view

Over time, virtualization technology is making it increasingly possible to execute just about any workload on a standard system and move that workload from system to system or replicate that workload on many systems to achieve the desired levels of performance, scalability and availability. Organizations are taking a clue from hosting companies and starting to think of all of their computing resources as a pool that can be assigned and reassigned dynamically to meet their service level objectives.

Who did this nearly 30 years ago? Why, IBM and the other mainframe suppliers, of course.

Wouldn't it be nice if there was a way to make everything "play nice" with everything else?

A CPU in the ointment

Since there has been no standard system architecture, most hardware suppliers developed their own processors and other system level components. Many of the key workloads organizations use are now hosted on a variety of hardware architectures. This, of course, gets in the way of the utopian dream of using all of an organization's systems as a pool of resources.

As virtual machine technology and other virtualization technology have improved, and as the underlying processor technology has seen dramatic performance improvements, it has become increasingly possible for a virtual machine software product to emulate a different processor while running on some other processor.
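At its heart, cross-processor emulation is an interpreter loop: software decodes a guest instruction stream and carries out its effects on whatever host CPU it happens to be running on. The two-register instruction set below is invented purely for this sketch; real emulators do the same job at enormously larger scale and speed:

```python
# Minimal sketch of processor emulation: interpret a made-up guest
# instruction set in software. The "guest" neither knows nor cares what
# the host CPU architecture is.

def run(program):
    """Execute a list of (opcode, *operands) tuples; return register state."""
    regs = {"A": 0, "B": 0}
    for op, *args in program:
        if op == "LOAD":          # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":         # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
    return regs

# Guest program: A = 2; B = 3; A = A + B
result = run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("HALT",)])
print(result)
```

The gap between this toy and a product is decades of engineering (binary translation, device models, memory management), but the principle is the same.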

The race is on

What would it be like if all of an organization's workloads, regardless of whether they run on Windows, Linux, UNIX, Mac OS, OS/390 or some other operating system, could run on the same hardware architecture at the same time, even if the UNIX workload expects to be running on a SPARC-based machine, Windows wants an industry standard machine, and the OS/390 workload needs a mainframe? It would allow the unification of the datacenter, from a hardware perspective, that organizations have wanted for decades but couldn't quite realize.

The ingredients are all on the table; who's going to be first to bake the solution?

The industry now has gotten to a place at which all of the ingredients are on the table: the virtualization technology, the processor performance, the need for cost reductions and the need to reduce both space and power requirements. The race to bake the perfect system solution is on between IBM and the other mainframe suppliers and the folks in the industry standard systems camp. Both want to be the first and the best unified environment.

The announcement of the IBM Z10 can clearly be seen as the company's first key move in this race. The company has been able to support several Linux distributions, AIX (IBM's UNIX), as well as all of the mainframe software for which it's been known for decades. This announcement tells us that Solaris is going to join everyone in the pool (c'mon in, the processing is fine). How long will it be before Windows appears?

What do you think that the industry standard systems camp is going to do to respond to this announcement?

Topics: Servers, Hardware, IBM, Virtualization


Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.



  • z10 - Change of Ideology in Enterprises

    I think the z10 is going to be the future winner, especially in the virtualization arena. The problem will be getting large corporations to "buy" in to the purchase. In the old days of server sprawl, a company could just purchase a new server when needed. The idea of planning resources for growth wasn't as important as it is for virtualization. As an example, a recent large purchase proposal I was involved with for virtualization caused a VP at this company to ask - "why should I purchase hardware that may not be used day one, or even day 90?" That type of near-sighted vision will kill virtualization if executives are not educated on why you will purchase resources in advance of virtual server deployment.
    • Short lifespan left...

      The mainframe still has life...but 10 years from now? I would not be betting on it. Why spend 6 or 7 figures on big iron when you can grow incrementally? Many of the features formerly unique to the mainframe have "trickled down" into "lower end" servers, which continue to improve in stability and have great advantages over the mainframe in terms of usability and flexibility now. IBM is making strides to bring some of those abilities to the mainframe, as noted by the Linux, JVM, and other technologies that now run on it...but in a sense, that has been a force-fit, whereas lower end machines continually get more powerful. I don't see how at some point in the future clusters of smaller machines will not overtake the single big machine as the "computer" of choice. You need only check out the infrastructure of large operations such as Google for a clue to where things are going.
      • networking equipment is power hungry...

        so, unless you're google, making custom networking gear for your data center, you're not going for the server farms, as it's more efficient to get something that is designed from the ground up for efficiency.

        yes, mainframes got shoe-horned with other stuff that's powerful... but those low-end servers weren't originally made for all those technologies, either. the tech came after both servers and mainframes had been around. They're both moving closer together, however, the big iron DOES have the power advantage in its footprint, so the connection of the bits involves less power consumption, and less cooling.

        server farms won't compare that way. they involve all the sprawl of the machines, all the additional networking gear, etc, etc, etc...

        I think that only time will tell, but mainframes are relevant, and they've got a fighting chance at doing well.
      • Short lifespan left...that's an old story..

        ""lower end" servers, which continue to improve in stability and have great advantages over the mainframe in terms of usability and flexibility now."

        "Lower end" describes it. While servers may have improved in stability our server farm still can't touch the stability, reliability, or availability that our mainframe has. And that flexibility is just a marketing myth from what I've seen.

        And while incremental purchases may look appealing in the short run. In the long run that big iron will be cheaper to purchase and operate than a couple thousand servers (I use that figure since it is the projected number of servers we were quoted we need to replace our mainframe.)

    One little example: when the government changed the date daylight savings time started, our distributed folks went into panic mode. Software had to be updated on all the servers, and a 25-person call center was set up to handle any problems on the PCs.
        On our mainframe we changed one parm, the offset from GMT from 6 to 5 and IPL'd (that's reboot for you server types).

        Cost to convert the servers and provide the extra support: close to $30,000.00. For the mainframe, 2, 3 bucks maybe. I could bore you and me with more examples, but the bottom line is those clusters of smaller machines do not necessarily translate into a smaller expense.
        • Maybe when the big iron can run Windows and .NET...

          Maybe when the big iron can run Windows and .NET we'll see the "lower end" servers go away. As for the time switch, if your organization was using ITIL and you had a good CMDB (which we had to design from scratch since there is no good shrinkwrap CMDB) you would have deployed a package using PatchLink or SMS and called it a day. Our CMDB runs on IIS and SQL and talks to dozens of data sources including IIS, WebSphere, Oracle, MSSQL, Ingres, MySQL, and industry standard tools such as HPOV, CiscoWorks, EMC ECC, PatchLink, Tivoli, Active Directory. We've also built in the ability to do release management in the same system and we can deploy to IIS and WebSphere equally well. The main advantage that the mainframe had over distributed systems was a management process. With ITIL that advantage goes away and suddenly blades cost less and you can actually find someone to write the software that runs on them because they support Windows and .NET instead of Linux and J2EE.
          • Re: Maybe when big iron can run Windows and .NET

            Please No! This is NOT enterprise stuff. Big iron has up-times measured in years or

            Reasons may be:

            0) Oracle databases (and in a *NIX shop Ingres) are OK, but enterprise people tend
            to use DB2. It may be permissible to use MS SQL Server if you do not require 99.99+%
            uptime - OS Service patches anyone?

            1) Big iron is built and designed by proper engineers, with years of experience. The
            main design criteria are reliability and fitness-for-purpose. Other equipment is
            usually designed to a price with an expected service life of <6 years.

            2) MS Certified whatevers, and most PFYs with manufacturer's certification
            qualifications, are usually kept well away from important stuff until they can show
            the scar-tissue of experience.

            There may be some hope for MS IT types, Windows 2008 Server PowerShell cmdlets
            are a nod in the direction of requiring skill. Unfortunately these are not going to
            impress the grizzled veteran until they have been in use for, say, 10 years and then
            SP2'd into Windows 2017 Server.

            For a long time, the support metrics for the various 'enterprise' platforms have
            been in the order of:
            1 IT person per thousand users for a main frame
            1 IT person per 200+ users for *NIX (*NIX users tend to know what they are doing)
            1 IT person per 20-50 Windows users.

            A Web application, can be measured at <100 users as far as support is concerned.
  • death of the mainframe

    back in the early 90s, when the PC had just started to become available and affordable ($1300 for a 3mb 386 33MHz 80 gig), the death of the mainframe was reported as being imminent. Some 20 years later IBM is still in business and has just announced a new frame. If hardware & software represents just 20% of the TCO over 5 years, it only makes sense to concentrate efforts on the area(s) where the biggest savings can be had. Centralization is one route. Unfortunately one of the major problems of centralization is bringing the would-be empire builders into the fold.
  • Where are the benchmarks to prove performance???

    Impressive performance? Huh??

    OK, so IBM is great at marketing this mainframe, but
    where are the industry standard benchmarks to prove the
    performance capabilities?
  • Oh God!

    I thought it said IBM announces the Z100!

    I hated that all-in-one beast of an early desktop.
  • I've heard it many times recently: if you have more than 2000 users

    ...who need access to the same information, then a mainframe will always be the best, fastest, most effective, most cost-effective solution.

    Web servers will never run on mainframes, they're not needed - but there are so many organisations where instant access to massive amounts of number crunching for multiple users is needed, and any number of PCs are not up to the task. I hope the multiple-PC camp realise this...
    • ...there is more than 2000 users...

      the title of my comment was truncated! the above is the rest of it. More than 2000 users who need access to the same information, mainframe wins every time.
  • RE: IBM announces the Z10: Is the mainframe still relevant?

    It's all going to come down to energy costs. If the government implements Cap and Trade, most large Data Centers with mass amounts of servers will have to find a way to decrease the power pulls from the PDU's. The mainframe uses two receptacles pulling about 13A. Having Windows servers utilizing an abundance of floor space and rack power is going to be less efficient than choosing a mainframe solution with limited power requirements. To add to this, the cooling requirements for a mainframe are significantly less than those used to cool the high density ovens in our Data Centers. We currently have 2 z9 systems and the cooling requirements are limited. The 19 BladeCenter H servers require overhead direct input cooling, costing more in energy than if we'd stuck with Linux on the z9's.