The immortal mainframe and what it means for the future of application development

Summary: How can we go about delivering modern services on old school technology?

When we visit a website, we have no way of knowing what's under that new and shiny jQuery or Bootstrap UI.

It might be a Node.js system, or Ruby on Rails, or ASP.NET MVC, or any of a dozen other modern web development frameworks. But that's all just presentation and marshalling: the real heavy lifting is carried out by back-end services that may not have changed since the 1970s.

The mainframe isn't dead. It's still out there, powering IT systems that quietly take in data and respond to queries. The applications that run on those undead mainframes haven't changed for years, locked in place by rules and regulations that are enshrined in law.

Ossified by time and forgotten by the outside world, they're systems that power many critical financial processes — processes that not only won't change, but that can't change.

We could rebuild them from scratch, using new technologies, but as always there's a catch. How do we migrate data and keep those systems running, and how do we budget for and build new systems? Often it's just easier to leave them running, as we're unlikely to know just what critical business processes depend on those systems (and often the only way to find out is to turn them off, and wait to see who complains).

There's good money to be made putting modern APIs on those ancient systems - and it's cheaper (and safer) than building those systems from scratch.

In many cases the skills used to build the original systems are long gone, and it's near impossible to find junior developers willing to spend their lives at the COBOLface, working with those near-immortal mainframes.

It's harder yet to find developers who want to learn hierarchical ADABAS, or to delve into the depths of MUMPS and PICK. While you could write new code to create those new APIs, you might just end up faking a terminal, sending queries as key sequences, translating ASCII and EBCDIC responses into objects and XML.
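To make that concrete, here's a minimal sketch, in Python, of the kind of translation shim being described: it decodes an EBCDIC screen buffer and maps fixed-width fields into a structure a modern API could serialise. The field offsets and the simulated screen are purely hypothetical; a real shim would read the buffer from a 3270 terminal session.

```python
# Minimal sketch of the screen-scraping shim described above: decode an
# EBCDIC terminal response and map fixed-width fields into a dictionary
# that a modern API layer could serialise as JSON or XML.
# The field offsets and the simulated screen are hypothetical.
import json


def decode_screen(raw: bytes) -> str:
    """Translate an EBCDIC (code page 037) byte stream into text."""
    return raw.decode("cp037")


def parse_customer_record(screen: str) -> dict:
    """Pick fixed-width fields out of a green-screen layout (offsets assumed)."""
    return {
        "account": screen[0:10].strip(),
        "surname": screen[10:40].strip(),
        "balance": screen[40:52].strip(),
    }


if __name__ == "__main__":
    # Simulate the EBCDIC bytes a 3270 session would return; a real shim
    # would capture this from the terminal data stream instead.
    screen_text = "0001234567".ljust(10) + "SMITH".ljust(30) + "000001250.00".ljust(12)
    raw = screen_text.encode("cp037")

    record = parse_customer_record(decode_screen(raw))
    print(json.dumps(record))
```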

With those new APIs, no matter how jury-rigged, those systems aren't going away. They're going to remain there as long as the data they contain is necessary — often until the last customer record is marked as "deceased", and even longer when the regulatory environment they operate in still assumes paper records. It's data that powers actuarial systems, providing the history that the insurance industry needs to fuel its ever more complex models.

Of course those aren't the only legacy systems out there — and some of them are a lot newer than the ancient mainframes. If a mainframe was too big for your business in the 80s and 90s you got a minicomputer, usually an AS/400.

That often went along with a client-server app model and with business-critical apps written in languages like Visual Basic, and in 4GLs like PowerBuilder or Forte. Many of those systems are still running, but the languages and platforms used to build them are long gone — along with the developers who built the systems.

We're left with near-immortal legacy systems, packed full of critical data, that can't be turned off, that are too expensive to migrate to new technologies, and that are nearly impossible to interface with the systems we want to use today.

Now imagine a large insurance company. It's possible to use various session-based virtual desktop tools to construct a workflow around the existing tools, and to script them for call center workers. It's even possible to cobble together CORBA interfaces to create APIs that can be used by more modern Java or .NET applications - but we're now adding multiple layers of abstraction and indirection.

That means more latency, and reliance on systems that can't scale like modern applications. Suddenly a busy web site is trying to funnel thousands of queries through an ancient Windows NT box to an even older minicomputer. Is it any wonder that users start complaining about performance?

That's only for one service. Modern applications are complex workflows and processes built across a mix of new and old systems. With the response of one system depending on inputs from another, latencies and performance bottlenecks can compound — affecting the user experience significantly. It's a familiar recipe for IT disaster, and one that we regularly forget.

That means that you're going to need to consider just how a conglomerate service needs to operate. If you need to pre-qualify users, don't make it part of the registration process — only capture the information needed when it's needed, and then attach it to the user profile.

If you're going to be querying multiple systems, consider making queries in parallel, pre-fetching as much information as possible, and working with cached data — and above all let your users know that they're receiving data asynchronously, using JSON callbacks and progress indicators as a workflow delivers results.
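As an illustration, here's a minimal sketch of that parallel-query-plus-cache pattern, using Python's asyncio. The two backend calls are hypothetical stand-ins for slow legacy services, and the cache is a deliberately naive in-process TTL dictionary; a production front end would swap in real connectors and a shared cache.

```python
# Minimal sketch of the parallel-query-plus-cache pattern. The two backend
# calls are hypothetical stand-ins for slow legacy services; the cache is a
# naive in-process TTL dictionary.
import asyncio
import time

_cache: dict = {}          # key -> (timestamp, result)
CACHE_TTL = 60.0           # seconds a cached answer stays fresh


async def query_policy_system(customer_id: str) -> dict:
    await asyncio.sleep(1.5)   # simulate a slow mainframe round trip
    return {"policy": "HOME-001", "customer": customer_id}


async def query_claims_system(customer_id: str) -> dict:
    await asyncio.sleep(2.0)   # simulate an even slower minicomputer
    return {"open_claims": 1, "customer": customer_id}


async def cached(key: str, factory):
    """Return a fresh cached result if we have one, otherwise run the query."""
    now = time.monotonic()
    if key in _cache and now - _cache[key][0] < CACHE_TTL:
        return _cache[key][1]
    result = await factory()
    _cache[key] = (now, result)
    return result


async def customer_overview(customer_id: str) -> dict:
    # Fire both legacy queries at once: total latency is the slowest
    # backend, not the sum of all of them.
    policy, claims = await asyncio.gather(
        cached(f"policy:{customer_id}", lambda: query_policy_system(customer_id)),
        cached(f"claims:{customer_id}", lambda: query_claims_system(customer_id)),
    )
    return {"policy": policy, "claims": claims}


if __name__ == "__main__":
    print(asyncio.run(customer_overview("C-1001")))
```

The point of the sketch is that the overall response time tracks the slowest backend rather than the sum of all of them, and repeat requests within the TTL never touch the legacy systems at all.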

We've been building large-scale ecommerce systems for less than two decades, so it's not surprising that people are still making mistakes, especially when trying to hook the latest and greatest web tech to databases that were designed in the 1970s.

It's not easy to bring those two worlds together, but it is possible. You just need to be aware that it's not going to be a smooth process unless you take one thing into consideration: the user.

Solving a complex technology problem is no longer enough. What's needed is a switch from that bottom-up application development model to a top-down approach, focusing on user experience first and foremost.

That's why many developers now talk about SLE: Service Level Expectations. In our user-centric world, it's more important to think about the resulting user experience, than the uptimes of the servers and services that power that experience.

As you build a service, you need to keep that user experience in mind, no matter if you're using the latest technology, or working with an ancient mainframe. If a user isn't getting the response they think they should be getting, then you're not meeting your SLE — no matter how well your servers might be performing.
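One way to put a number on that is sketched below, under the assumption of a simple latency target: measure the share of end-to-end user responses that arrive within the time users expect, rather than the uptime of any individual server. The two-second target and the sample latencies are illustrative, not a standard.

```python
# Minimal sketch of measuring against a Service Level Expectation: rather
# than tracking server uptime, track how often the user actually got a
# response within the time they expect. The 2-second target and the sample
# latencies are illustrative assumptions.

SLE_TARGET_SECONDS = 2.0

# End-to-end response times as experienced by users, in seconds.
observed_latencies = [0.8, 1.2, 3.5, 0.9, 2.4, 1.1, 0.7, 4.0]


def sle_compliance(latencies, target):
    """Fraction of user requests that met the expected response time."""
    met = sum(1 for t in latencies if t <= target)
    return met / len(latencies) if latencies else 1.0


if __name__ == "__main__":
    rate = sle_compliance(observed_latencies, SLE_TARGET_SECONDS)
    print(f"{rate:.0%} of requests met the {SLE_TARGET_SECONDS}s expectation")
```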

About Simon Bisson

Simon Bisson is a freelance technology journalist. He specialises in architecture and enterprise IT. He ran one of the UK's first national ISPs and moved to writing around the time of the collapse of the first dotcom boom. He still writes code.

Talkback

  • IMHO, it is about the laziness to migrate legacy systems.

    I have YET to find a business legacy system that is too complex to rebuild from scratch. Usually it is filled with spaghetti code but, most of the time, it is simple to understand. For example, most COBOL code simply saves to and reads from a database, does some business logic, and nothing more.
    Most big projects are not more than 1000 functions.
    It is possible to create a small team of 4 people and migrate 1 function per day (migrate, then test, then fix and integrate with the front end), so a team of 24 developers could migrate a big project in a single year. However, the problem is the cost: it involves adding 24 (or more) developers for a whole year, and unless the CEO is close to the developers it is pretty unlikely that he will spend on 24 developers just for a migration of a system that "just works".
    However, it is not strange to find that keeping a legacy system costs a fortune, not least keeping a floor filled with legacy programmers doing nothing but fixing the system that "works".
    magallanes
    • Partially true

      The problem is that the safe way to migrate from old systems does not look attractive financially. The first step has to be a simple re-design and rewrite of the old system that reproduces the same services. It means that, from the bean counters' point of view, this is worthless: somebody is recreating the same old system plus new bugs. However, there are things that cannot be measured right away. A rewrite would mean more people familiar with the data and processes, plus an ideal testing environment where the old and new systems can run side by side. After the first migration step is done there is no limit to what else can be done. New people can work on parts of the system, since there is no need to dig a dude out of the grave to look at some COBOL code.
      paul2011
      • Not to mention the cost of running duplicate systems

        during the development and testing...

        and the old system is likely still fully supported, and cheaper than the "new" system actually is. More reliable and everything.

        Also note the "legacy programmers" are not legacy - they know the system, how it works, and have been updating it as requirements change.
        jessepollard
      • Justification for the bean counters...

        ...only needs quantification of risk. If you can calculate the dollar value of the risk of NOT migrating/updating/downtime etc, you can usually build a pretty solid business case.
        jaykayess
        • Hardware and power

          don't forget the risk of outdated hardware and not being able to get replacement parts or media.

          Then there is the power saving issue: moving from a mainframe to a bunch of blades can save hundreds or thousands of dollars a month in electricity costs. We have one customer left on a DEC Alpha mini and the rest have all migrated to "PC" servers. That one Alpha draws more power than the rest of the server infrastructure combined (about a dozen servers).
          wright_is
    • Legacy integration is all about the economics

      Nicely put 'a floor of programmers doing nothing but fixing the system that "works"'

      One of the most compelling scenarios for upgrade/efficient integration is when one company acquires another. Even given practically unlimited IT budgets some of the world's largest financial institutions are struggling to combine complex "estates" which are full of legacy systems.

      For many companies, it is not that they don't want to eliminate the legacy systems, it is more that the risk of system failure is far greater than the cost of ongoing support, so they integrate, adapt and enhance what they have already.

      There are also the times when the business needs a solution quickly and can't wait for IT to spend years replacing a system or even providing an API to access data locked away on some ancient system.

      I'm working for a startup which is focused on trying to change the economics and efficiency of legacy system integration using a new software platform developed for that purpose. Our argument would be that by leveraging the legacy system it is possible to deliver business value with minimal risk. Perhaps one tactic might be to skin with an API first and then re-build the system behind the facade.
      Alex Redston
  • Bitter clingers!

    Word from the underlings here is that we have many clients, including some in the insurance business (notoriously slow to change), who have hung onto their mainframes for dear life...until recently. The understanding is finally coming that lipstick on a pig still has a pig underneath. The innovation powered by Wintel and server-based computing has reached the level where the cost of the technical debt from leaving systems on the mainframe is something they are now willing to absorb. The realization is that the skills shortage exists for a reason: there is something better, and who wants to work on "not better"?

    My rep might kill me for saying it, but even a Linux server is better than a crusty old mainframe! It's about stepping stones. A move to some server environment is a move closer towards the ultimate: Windows-based clients and servers. Once they get there, it's harmony. Developers are plentiful. Ease of use is unparalleled. Those who really want to make it worthwhile will check into Azure, and upgrade their sponsorship in order to always have access to the latest bits on their desktops!
    Techboy_z
    • Depends entirely on the mainframe.

      The z systems from IBM are still considered a mainframe. And a single such mainframe can host about 1200 VMs.

      And identify the Wintel server that has 5 9s uptime over a year.

      There aren't any.

      Mainframes with such uptime are fairly common.

      Even Azure doesn't have such uptimes.

      Then there is the problem that Windows servers aren't recommended to have more than one supported service... So you now need 10 times the number of servers.
      jessepollard
    • Mike COX ....!!!

      I'm SOOOOO glad you are back!!!
      linux4u
  • non-mainframe servers ...

    It has been my experience that non-mainframe servers cannot handle the workload volume (they do not scale very well) nor provide the up-time that mainframes do. If they could then mainframes would be gone.
    jew123@...
    • Not sure

      What your experience has been or is? I can assure you VM-based Linux or Wintel servers scale quite well and are extremely competitive with what a mainframe is capable of AND BEYOND. Indeed I am in charge of a mission-critical system that has both mainframe/COBOL/VSAM components (yes, that's real old, folks) and also has Wintel VM components. Since the mainframe portion is essentially "batch heavy", it has times during the night when it is simply NOT available, meaning my customers can't even inquire on their latest information. The VM Wintel environment is based on a real-time web services architecture and is designed to be available 24x7, and of course it is. We have to use our VM Wintel environment to "front end" the old legacy mainframe and cache data so our customers can still query their data while the old mainframe runs its batch. The system is so old, the batch takes about 3 1/2 hours to run. If this were re-architected on a new Wintel or Linux-based platform, every function (almost every function) would be real-time, hence there would be ZERO outage time for our customers and they would get the latest and greatest data - which today they sometimes get and sometimes they don't. In my case the mainframe is an impediment to delivering the latest and greatest information to my customers whenever they ask for it. Eventually it will have to be replaced. Simply too old to support the modern-day access of our existing customer base.
      BruinB88
  • Glad to see another article supporting the mainframe!

    Despite being considered a legacy system, the mainframe still remains integral for supporting the majority of today's mission-critical applications. CIOs and businesses alike have invested many years and significant portions of their budgets simply to maintain their mainframe systems. As mentioned in this article, mainframe maintenance can put a significant dent in the IT wallet. For many organizations, the obvious next step to solve these challenges is through modernization.

    At Micro Focus we encourage legacy system modernization, rather than simply "keeping the lights on." In our experience, modernization presents a myriad of benefits for the business — it can significantly reduce IT costs, free up budget for other projects and increase overall productivity. Like you said, it's not easy to bring the old and new together, but it can be done.

    Although these legacy systems don't seem to be reaching extinction anytime soon, with the proper approach to application modernization, companies can keep their outdated mainframe systems alive! Through modernization, systems will work smarter, more efficiently and fully function in today's modern IT world.

    -- Paul Averna, Vice President, Enterprise Solutions, Micro Focus
    Paul Averna
    • I thought this blog was to share valuable information

      We can get sales pitches from anywhere, why do you sales guys come into these blogs, selling your wares? Great I'm glad to see Micro Focus is still around, but I really don't think commenting about a sales opportunity when discussing merits of either keeping or shelving a mainframe-based system is appropriate here.
      BruinB88
  • SOA Helps...

    Although most organizations are at least partially down this path, I believe the first step is to create a Service Oriented Architecture. Of course this is not an overnight task and the article is right to cite that this does add more layers. With proper caching, performance issues can be kept in check and once you realize everything is talking through the services, you can begin to migrate various functionality more easily. Rather than completely rewrite system X, inspect what it's doing. It likely crosses several domains of which some or perhaps most don't really belong there. You'll probably discover they are mostly duplicated in another system. Of course the duplication is generally not perfect and here's where the real work begins. You need to figure out which system SHOULD own the domain and make sure it includes all necessary business functionality so it can be removed from the other systems and then published back to them. Once the duplicate domains are eliminated, you may find the functionality of what remains much less daunting to recreate/move/modernize.
    robradina@...