
The immortal mainframe and what it means for the future of application development

How can we go about delivering modern services on old school technology?
Written by Simon Bisson, Contributor

When we visit a website, we have no way of knowing what's under that new and shiny jQuery or Bootstrap UI.

It might be a node.js system, or Ruby on Rails, or ASP.NET MVC, or any of a dozen other modern web development frameworks. But that's all just presentation and marshalling: the real heavy lifting is carried out by back-end services that may not have changed since the 1970s.

The mainframe isn't dead. It's still out there, powering IT systems that quietly take in data and respond to queries. The applications that run on those undead mainframes haven't changed for years, locked in place by rules and regulations enshrined in law.

Ossified by time and forgotten by the outside world, they're systems that power many critical financial processes — processes that not only won't change, but that can't change.

We could rebuild them from scratch, using new technologies, but as always there's a catch. How do we migrate data and keep those systems running, and how do we budget for and build new systems? Often it's just easier to leave them running, as we're unlikely to know just what critical business processes depend on those systems (and often the only way to find out is to turn them off, and wait to see who complains).

There's good money to be made putting modern APIs on those ancient systems - and it's cheaper (and safer) than rebuilding them from scratch.

In many cases the skills used to build the original systems are long gone, and it's near impossible to find junior developers willing to spend their lives at the COBOLface, working with those near-immortal mainframes.

It's harder yet to find developers who want to learn hierarchical ADABAS, or to delve into the depths of MUMPS and PICK. While you could write new code to create those new APIs, you might just end up faking a terminal, sending queries as key sequences, translating ASCII and EBCDIC responses into objects and XML.
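
To make that scraping approach concrete, here's a rough sketch in Python of the final translation step, assuming you've already captured an EBCDIC screen buffer from whatever terminal emulation layer you're using. The fixed-width field layout is entirely hypothetical; the EBCDIC-to-text conversion is simply Python's built-in cp037 codec.

import xml.etree.ElementTree as ET

def screen_to_record(screen: bytes) -> dict:
    text = screen.decode("cp037")              # EBCDIC -> Unicode text
    return {                                   # hypothetical fixed-width layout
        "policy_id": text[0:10].strip(),
        "holder": text[10:40].strip(),
        "status": text[40:50].strip(),
    }

def record_to_xml(record: dict) -> str:
    root = ET.Element("policy")                # hypothetical element names
    for field, value in record.items():
        ET.SubElement(root, field).text = value
    return ET.tostring(root, encoding="unicode")

# A 50-byte sample record, encoded the way the mainframe would send it.
sample = f"{'P123456789':<10}{'JOHN SMITH':<30}{'ACTIVE':<10}".encode("cp037")
print(record_to_xml(screen_to_record(sample)))

It's crude, but it turns a terminal screen into an object or an XML document that a modern service can consume without touching the code behind it.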

With those new APIs, no matter how jury-rigged, those systems aren't going away. They're going to remain there as long as the data they contain is necessary — often until the last customer record is marked as "deceased", and even longer where the regulatory environment they operate in still assumes paper records. It's data that powers actuarial systems, providing the history that the insurance industry needs to fuel its ever more complex models.

Of course those aren't the only legacy systems out there — and some of them are a lot newer than the ancient mainframes. If a mainframe was too big for your business in the '80s and '90s, you got a minicomputer, usually an AS/400.

That often went along with a client-server app model and with business-critical apps written in languages like Visual Basic, and in 4GLs like PowerBuilder or Forte. Many of those systems are still running, but the languages and platforms used to build them are long gone — along with the developers who built them.

We're left with near-immortal legacy systems, packed full of critical data, that can't be turned off, that are too expensive to migrate to new technologies, and that are nearly impossible to interface with the systems we want to use today.

Now imagine a large insurance company. It's possible to use various session-based virtual desktop tools to construct a workflow around the existing tools, and to script them for call center workers. It's even possible to cobble together CORBA interfaces to create APIs that can be used by more modern Java or .NET applications - but we're now adding multiple layers of abstraction and indirection.

That means more latency, and reliance on systems that can't scale like modern applications. Suddenly a busy website is trying to funnel thousands of queries through an ancient Windows NT box to an even older minicomputer. Is it any wonder that users start complaining about performance?

That's only for one service. Modern applications are complex workflows and processes built across a mix of new and old systems. With the response of one system depending on inputs from another, latencies and performance bottlenecks can compound — affecting the user experience significantly. It's a familiar recipe for IT disaster, and one that we regularly forget.

That means you're going to need to consider just how a conglomerate service should operate. If you need to pre-qualify users, don't make it part of the registration process — capture only the information you need, when you need it, and then attach it to the user profile.

If you're going to be querying multiple systems, consider making queries in parallel, pre-fetching as much information as possible, and working with cached data — and above all let your users know that they're receiving data asynchronously, using JSON callbacks and progress indicators as a workflow delivers results.
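
As a rough sketch of that fan-out pattern, the Python below fires two slow legacy lookups in parallel and caches one of them. The fetch_policy and fetch_claims functions are hypothetical stand-ins for whatever wrappers sit in front of the mainframe and the minicomputer, with sleeps in place of real queries.

from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def fetch_policy(customer_id: str) -> dict:
    time.sleep(2)                      # stand-in for a slow mainframe query
    return {"policy": "P123456789", "customer": customer_id}

def fetch_claims(customer_id: str) -> list:
    time.sleep(3)                      # stand-in for a slow AS/400 query
    return [{"claim": "C42", "customer": customer_id}]

def customer_view(customer_id: str) -> dict:
    # Fired in parallel, the page waits roughly as long as the slowest
    # back end (~3s) rather than the sum of both (~5s); the cached policy
    # lookup means a repeat visit doesn't touch the mainframe at all.
    with ThreadPoolExecutor(max_workers=2) as pool:
        policy = pool.submit(fetch_policy, customer_id)
        claims = pool.submit(fetch_claims, customer_id)
        return {"policy": policy.result(), "claims": claims.result()}

print(customer_view("42"))

The front-end half of the story — streaming those results to the browser as they arrive and showing progress while the slow calls complete — sits on top of a pattern like this.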

We've been building large-scale ecommerce systems for less than two decades, so it's not surprising that people are still making mistakes, especially when trying to hook the latest and greatest web tech to databases that were designed in the 1970s.

It's not easy to bring those two worlds together, but it is possible. You just need to be aware that it's not going to be a smooth process unless you take one thing into consideration: the user.

Solving a complex technology problem is no longer enough. What's needed is a switch from that bottom-up application development model to a top-down approach, focusing on user experience first and foremost.

That's why many developers now talk about SLEs: Service Level Expectations. In our user-centric world, it's more important to think about the resulting user experience than about the uptimes of the servers and services that power that experience.

As you build a service, you need to keep that user experience in mind, whether you're using the latest technology or working with an ancient mainframe. If a user isn't getting the response they think they should be getting, then you're not meeting your SLE — no matter how well your servers might be performing.
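
A small illustration of the difference, sketched in Python: the decorator below measures the response time the user actually experiences against a one-second expectation, and flags a miss even though every server involved would report itself as healthy. The function names and the one-second budget are assumptions for the example, not a standard.

import functools
import time

SLE_SECONDS = 1.0   # hypothetical budget: users expect an answer within a second

def track_sle(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = func(*args, **kwargs)
        elapsed = time.monotonic() - start
        status = "within SLE" if elapsed <= SLE_SECONDS else "SLE missed"
        print(f"{func.__name__}: {elapsed:.2f}s ({status})")
        return result
    return wrapper

@track_sle
def quote_page(customer_id: str) -> str:
    time.sleep(1.4)   # the servers are "up", but the legacy hop makes the user wait
    return f"quote for customer {customer_id}"

quote_page("42")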
