
The mainframe's virtual renaissance

The current buzz around virtualisation may sound familiar to anyone with experience of high-end computing's origins — so what makes today's scenario so different?
Written by Sally Whittle, Contributor

When IBM first introduced the idea of a mainframe virtual machine back in the 1960s, few people would have predicted that the IT industry would come full circle more than 40 years later. But increased interest in virtualisation and the demand for 'greener' computing could see a revival of interest in mainframe computing, according to some industry insiders.

"We are absolutely seeing interest in mainframes from clients who want to use more virtualisation," says Roy Illsley, a senior research analyst with Butler Group. "It's not an approach for everyone but, done well, it can reduce power consumption and footprint, improve reliability and provide a lot of value to the business."

Although virtualisation is most often discussed in terms of Wintel and Unix servers, the idea of consolidating many workloads onto a single machine and creating 'virtual partitions' was invented on the mainframe in 1967, says Carl Greiner, an analyst with Ovum. "This isn't a new idea by any stretch of the imagination, and virtualisation has always been done on mainframes."

The key advantage of using a mainframe for virtualisation is that it improves performance, says James Governor, a principal analyst with RedMonk. "Virtualisation technology on the mainframe is very mature, and offers availability, stability and security. For some applications and some customers, that's tremendously important."

Increasingly, companies are also coming around to the idea of mainframes offering better utilisation and efficiency, adds Illsley. "If you're looking to build a datacentre in London and you have the choice of consolidating onto 20 or 30 Wintel servers with the associated power and cooling bills — or a single mainframe that will use less power and reduce your footprint — well, it's quite compelling, for some companies," he says.

Today's mainframes can run thousands of virtual machines with very high utilisation rates, excellent security, downtime that's measured in minutes, and relatively low power consumption per MIPS (millions of instructions per second).
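To put those efficiency claims in rough context, here is a minimal back-of-envelope sketch in Python comparing the power draw of a small Wintel estate with a single consolidated mainframe. Every figure is an illustrative assumption, not a vendor-published number.

```python
# Back-of-envelope power comparison: a distributed x86 estate versus one
# consolidated mainframe. All figures below are illustrative assumptions.

X86_SERVER_COUNT = 30        # servers to be consolidated (assumed)
X86_WATTS_PER_SERVER = 400   # assumed draw per server, including cooling overhead
MAINFRAME_WATTS = 6000       # assumed draw for one mid-range mainframe

x86_total_watts = X86_SERVER_COUNT * X86_WATTS_PER_SERVER
saving_pct = 100 * (x86_total_watts - MAINFRAME_WATTS) / x86_total_watts

print(f"x86 estate: {x86_total_watts / 1000:.1f} kW")
print(f"Mainframe:  {MAINFRAME_WATTS / 1000:.1f} kW")
print(f"Estimated power saving: {saving_pct:.0f}%")
```

Under these (entirely assumed) numbers the consolidated machine draws half the power of the estate it replaces; the real answer depends on the workloads and hardware involved.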

Cost, flexibility and a fall from grace
So what led to the mainframe's decline almost 20 years ago, and are those issues still relevant today? Greiner argues that in the early 1990s, the mainframe became a very expensive platform on which to run new applications.

Mainframes have also suffered from a reputation for being inflexible. Applications cannot easily be converted to run on IBM's z/OS mainframe operating system, for example, so an application written for another platform often requires a complete rewrite to work on z/OS.

Many applications just aren't suited to a mainframe environment, Greiner adds. "If you're developing applications in .Net or Java, you're not going to be able to recompile them to run on a mainframe, and why would you want to?" he says. "Mainframes are for high transaction applications that require a lot of processing power, high availability or massive scalability."

Another reason for the mainframe's decline is a loss of skills in mainframe management. When organisations needed to cut IT spending in the 1990s, it was relatively easy to cut mainframe staff because the machines kept running with very little active management, says Illsley. "You ended up with these dusty, slightly forgotten machines in the back office that were just workhorses, and kept doing their job," he says.

A more open platform
To a certain extent, many of these issues have been addressed, say the experts. To begin with, IBM now offers Linux engines at a fraction of the cost of conventional z/OS engines, and the pricing model is no longer tied to MIPS, so users aren't paying for unused capacity, says Governor.

What's more, mainframe hardware and software have improved enormously in the past 20 years, so the platform is a good deal more open, says Governor. "IBM has made it a lot easier for companies to get data out of a mainframe and interrogate it using something like WebSphere, as well as managing that data," he says. "Whereas once mainframes might have been seen as a bit of a roach motel for data, they're now a far more open platform."
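As a purely illustrative sketch of that openness, the snippet below queries a DB2 database on a mainframe directly from a Python client using IBM's ibm_db driver; this is just one access route (WebSphere-based integration, which Governor mentions, is another). The hostname, credentials and table name are all placeholders.

```python
# Illustrative only: reading data off a mainframe-hosted DB2 database
# with the ibm_db driver. Host, port, credentials and table are placeholders.
import ibm_db

dsn = (
    "DATABASE=SAMPLEDB;"
    "HOSTNAME=mainframe.example.com;"  # hypothetical host
    "PORT=446;"                        # common DRDA port; check your setup
    "PROTOCOL=TCPIP;"
    "UID=appuser;"
    "PWD=secret;"
)

conn = ibm_db.connect(dsn, "", "")
stmt = ibm_db.exec_immediate(
    conn, "SELECT ORDER_ID, TOTAL FROM ORDERS FETCH FIRST 10 ROWS ONLY"
)

row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["ORDER_ID"], row["TOTAL"])
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```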

So which companies are using mainframes in 2008? The vast majority of virtualisation on the mainframe happens when companies already have a mainframe and want to make it more valuable and efficient, says Greiner. "These are what I call mainframe-centric organisations and the chances are they are already virtualising on the mainframe, and are now turning their attention to using the platform to virtualise new applications or to enhance existing systems."

Many organisations would still consider a mainframe outside their budget, adds Greiner. "Although the price of a new mainframe has fallen, it's still probably only a handful of companies that are going out and buying the new machines specifically for this stuff," he says.

Governor agrees converts are most likely already using mainframes. He argues companies using mainframes today can be split into two camps. First, there are those who still love their mainframes and are probably still looking to get the best performance from them; these companies will be actively investigating virtualisation. Second, there are mainframe owners who haven't updated their systems in some time, and have basically left the machines to their own devices for many years, running important back-office applications. These companies are unlikely to have explored virtualisation and may never do so, says Governor.

The risk is that companies falling into the second group will try to deploy virtualisation on the mainframe without first modernising and rationalising their IT environment, and in so doing fail to fully realise the benefits of the technology. Simply taking existing applications and moving them wholesale onto a mainframe to improve utilisation is "madness", says Governor. "It makes no sense at all; you don't get any business benefit, and no cost benefit either," he says.

Reaping the potential benefits
To achieve the benefits of mainframe virtualisation, the first step is rationalisation.

This starts with a thorough audit of the IT environment, says Paul Hammond, UK managing director of consulting firm Glasshouse. The audit should cover all existing systems: everything from current rack space and power consumption to which applications run on which servers. "We often find companies where there are three or four finance systems being used, when it would be easier to use one or two," says Hammond. "It's important to identify those sorts of issues before doing any virtualisation."
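A minimal sketch of what such an audit script might look like, assuming a hypothetical inventory.csv with columns for hostname, application, rack units and power draw, and surfacing the kind of application duplication Hammond describes:

```python
# Audit sketch: aggregate a (hypothetical) server-inventory CSV.
# Assumed columns: hostname, application, rack_units, watts
import csv
from collections import defaultdict

apps = defaultdict(list)
total_watts = 0
total_rack_units = 0

with open("inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        apps[row["application"]].append(row["hostname"])
        total_watts += int(row["watts"])
        total_rack_units += int(row["rack_units"])

print(f"Estate: {total_rack_units}U of rack space, {total_watts / 1000:.1f} kW")
for app, hosts in sorted(apps.items()):
    if len(hosts) > 1:  # possible duplication, e.g. several finance systems
        print(f"'{app}' runs on {len(hosts)} servers: {', '.join(hosts)}")
```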

Next, companies should carefully assess which physical servers are running which operating systems and applications, and then conduct a consolidation programme. Do you need all your current applications? Are some applications more suitable for running in a mainframe environment?
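Building on the audit sketch above, a simple and again entirely illustrative filter can flag chronically under-used servers as consolidation candidates; the 20 percent threshold and the sample rows are assumptions.

```python
# Flag consolidation candidates: servers whose CPU utilisation stays low.
# Threshold and inventory rows are illustrative assumptions.
inventory = [
    {"hostname": "web01", "application": "intranet",  "cpu_utilisation_pct": 8},
    {"hostname": "db04",  "application": "billing",   "cpu_utilisation_pct": 72},
    {"hostname": "app07", "application": "hr-portal", "cpu_utilisation_pct": 11},
]

UTILISATION_THRESHOLD = 20  # percent; assumed cut-off for "under-used"

for server in inventory:
    if server["cpu_utilisation_pct"] < UTILISATION_THRESHOLD:
        print(f"{server['hostname']} ({server['application']}): "
              f"{server['cpu_utilisation_pct']}% CPU - consolidation candidate")
```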

As part of this analysis, Greiner advises businesses to consider carefully the costs of virtualising, including any recompiling or rewriting of applications that might be needed to run them on a virtualised mainframe. "If you have a pile of Linux applications running on Wintel servers at 10 percent utilisation, then virtualising and porting them to a mainframe won't be a problem. But with certain applications, such as those written in Java or .Net, while it might be possible to get them running on a mainframe given the right engine and tools, it's unlikely to be cost-effective or even particularly necessary."

So will the cost of recompiling an application for the mainframe outweigh the expected performance benefits? Would it be better to simply virtualise on a Wintel or Unix server, or perhaps invest in a new Linux application that can more easily be ported to the mainframe?
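One way to frame those questions is a toy multi-year cost model. Every figure below is a placeholder to be replaced with real quotes and audit data, and the options named are simply the three routes discussed above.

```python
# A toy decision model for a single application. All figures are assumed
# placeholders (GBP), not real migration or running costs.

def total_cost(migration_cost, annual_run_cost, years=5):
    """Total cost of ownership over the period, ignoring discounting."""
    return migration_cost + annual_run_cost * years

options = {
    # option: (one-off migration cost, annual running cost)
    "recompile for z/OS":          (250_000, 40_000),
    "virtualise on Wintel/Unix":   (20_000,  90_000),
    "new Linux app, port to z":    (60_000,  30_000),
}

for name, (migrate, run) in options.items():
    print(f"{name:28s} 5-year cost: £{total_cost(migrate, run):,}")
```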

Some applications are inherently better suited to a mainframe environment, and the obvious advice is simply to focus on moving those applications back into a virtualised mainframe environment, says Greiner. "It makes no sense to go and compartmentalise your new SOA architecture and move it onto a mainframe, but if you have high transaction applications that demand reliability and availability, this may be a viable alternative," he says.

For example, database applications often struggle in a virtualised environment of any kind, says Illsley. "Virtualising doesn't guarantee better performance — you have to do the right virtualisation for your workload," he says.

'Fool with a tool'
It's often hard work to convince clients virtualisation is not a solution in itself, says Hammond. "There's no point in transferring all your bad practice to a new and expensive platform," he says. "We urge clients to spend a lot of time thinking about where they can consolidate applications, how their data-management policies can be improved, and work on robust service cost models that show businesses the cost and benefits of running specific services in a virtualised mainframe environment."
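A service cost model of the kind Hammond describes might, in its simplest form, apportion the mainframe's annual cost to each business service according to the capacity it consumes. The sketch below uses invented figures throughout; real models would also fold in software licensing, staff and migration costs.

```python
# Sketch of a per-service cost model: attribute platform cost to each
# service by the MIPS it consumes. All inputs are placeholder figures.

MAINFRAME_ANNUAL_COST = 500_000   # hardware, software, power, staff (assumed)
MAINFRAME_CAPACITY_MIPS = 1_000   # assumed usable capacity

def service_cost(mips_used):
    """Share of annual mainframe cost attributed to a service."""
    return MAINFRAME_ANNUAL_COST * mips_used / MAINFRAME_CAPACITY_MIPS

services = {"payments": 300, "billing": 150, "reporting": 50}  # MIPS (assumed)
for name, mips in services.items():
    print(f"{name:10s} uses {mips:3d} MIPS -> £{service_cost(mips):,.0f}/year")
```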

The reality is that even moving bad data and practices onto a mainframe will generate some benefits in performance and utilisation, but the gains will be far greater if you do the upfront work, or at least do it on an ongoing basis once the migration is complete. "We call virtualisation a fool with a tool," says Illsley. "If you throw stuff onto a mainframe without thinking about it, you'll probably improve utilisation by default, but you'll never get the performance or efficiency that justifies the cost of the platform."
