Virtualization: hardware and software working together in harmony

IT decision-makers must consider all of the reasonable alternatives available to them. Mainframes and midrange systems can’t be merely thought of as a legacy of the past.
Written by Dan Kusnetzky, Contributor

We've been watching the industry as a whole shift its focus from the attributes of physical systems (performance, reliability, availability, cost, power consumption and the like) to how applications, application components and complete workloads can be moved into a more virtual environment. This is a game changer on many levels. In particular, it improves IT efficiency and reduces data center energy usage, which is good for both the organization's pocketbook and the environment.

What is often held in the shadows and seldom discussed is that a virtualized environment is a carefully constructed and managed illusion that requires a well architected balance of capabilities of system processors, memory, internal communications busses, storage devices, networking equipment, as well as a complex layer of software technologies.

Virtualization is a relative newcomer to the industry standard X86 world. While it is growing by leaps and bounds there, it is important to acknowledge that nearly all of the concepts we're seeing emerge in the X86 world were developed, tested and, some would say, perfected elsewhere. I'm referring to IBM's System z (commonly known as "the mainframe"). These systems and their supporting software continue to drive industry innovation and are often the most efficient and cost-effective way to address large-scale workloads.

What Is Virtualization?

Virtualization is a way to abstract applications and their underlying components away from the hardware supporting them and present a logical or virtual view of these resources. This logical view may be strikingly different from the physical view.

Virtualization can create the artificial view that many computers are a single computing resource, or that what appears to be a single system is really many individual computers working together. It can make a single large storage resource appear to be many smaller ones, or make many smaller storage devices appear to be a single device.
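
As an illustration of the logical-versus-physical idea (and not a description of any particular product), here is a minimal Python sketch that presents several small byte stores to the caller as one larger address space; the class name and device sizes are invented for the example.

```python
# Minimal sketch: several small "physical" stores presented as one logical device.
# The class name and sizes are illustrative, not taken from any real product.

class AggregatedStore:
    """Presents a list of fixed-size byte arrays as one contiguous address space."""

    def __init__(self, device_sizes):
        self.devices = [bytearray(size) for size in device_sizes]
        self.capacity = sum(device_sizes)

    def _locate(self, offset):
        # Map a logical offset to (device index, offset within that device).
        for index, device in enumerate(self.devices):
            if offset < len(device):
                return index, offset
            offset -= len(device)
        raise IndexError("offset beyond logical capacity")

    def write_byte(self, offset, value):
        index, local = self._locate(offset)
        self.devices[index][local] = value

    def read_byte(self, offset):
        index, local = self._locate(offset)
        return self.devices[index][local]

# Three small "devices" appear to the caller as one 3,072-byte volume.
volume = AggregatedStore([1024, 1024, 1024])
volume.write_byte(2500, 0x7F)                    # actually lands on the third device
print(volume.capacity, volume.read_byte(2500))   # 3072 127
```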

This virtual view is constructed using excess processing power, memory, storage, or network bandwidth. For this magic to work smoothly, efficiently and reliably, system architects and developers must find the correct balance of hardware and software functions. If the magic incantation is done just so, the results are high levels of manageability, reliability, availability, and performance. Other important results of virtualization are reduced requirements for floor space and power, and less heat production. This can mean smaller, faster, more power-efficient, more "green" computing for everyone.

The importance of the correct balance of hardware and software can be seen at many layers of the virtualization technology in use today. Let's examine the layers of virtualization technology and how having the right mix of hardware and software can transform virtualization from a computer science project into a technology that can be safely and simply used in a production environment.

Model of Virtualization

Analysts often find that it is much easier to understand a complex environment if they build a reference model. The Kusnetzky Group Model of virtualization is an example.

Reference models must be comprehensive and the segments must be mutually exclusive to be really useful.

Over time, most of the functions that computers perform have in some way benefited from virtualization. It is important to note that some products incorporate features that straddle one or more layers of the model. Those products are typically assigned to the layer describing their most commonly used functions.

As one would expect, industry and technological changes require that the model be revisited regularly to determine if previous categories should be merged into a single new category or deleted.

Goals of Virtualization

Organizations are often seeking different things when using virtualization technology. An organization’s virtualization goals might include the following:

    • Allowing any network-enabled device to access any network-accessible application over any network, even if that application was never designed to work with that type of device
    • Isolation of one workload or application from another to enhance security or manageability of the environment
    • Isolation of an application from the operating system, allowing an application to continue to function even though it was designed for a different version of the operating system
    • Isolation of an application from the operating system, allowing an application to function on a foreign operating system
    • Increasing the number of people that an application can support, by allowing multiple instances to run on different machines simultaneously
    • Decreasing the time it takes for an application to run, by segmenting either the data or the application itself and spreading the work over many systems (a brief sketch of this approach follows the list)
    • Optimizing the use of a single system, allowing it to work harder and more intelligently (that is, reducing the amount of time the processor sits idle)
    • Increasing the reliability or availability of an application or workload through redundancy (if any single component fails, this virtualization technology either moves the application to a surviving system or restarts a function on a surviving system)
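
The goal of shortening run time by segmenting the data and spreading the work can be sketched in a few lines of Python; the worker count and the squaring function below are invented placeholders, and the local processes merely stand in for "many systems."

```python
# Minimal sketch of the "segment the data and spread the work" goal:
# split a list of inputs across several worker processes and combine the results.
from multiprocessing import Pool

def work(item):
    # Stand-in for a CPU-bound task; a real workload would do something useful here.
    return item * item

if __name__ == "__main__":
    inputs = list(range(1_000_000))
    with Pool(processes=4) as pool:               # four workers stand in for "many systems"
        results = pool.map(work, inputs, chunksize=10_000)
    print(sum(results))
```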

The organization’s choice of virtualization technology is dependent upon what it’s trying to accomplish. While there are typically many ways to accomplish these goals, some goals direct organizations’ decision-makers to select specific tools.

Layers of Virtualization at Work

There are many layers of technology that virtualize some portion of a computing environment. Let’s look at each of them in turn.

Access virtualization

Access virtualization is the use of both hardware and software technology that allows nearly any device to access any application without either having to know too much about the other. The application sees a device it’s used to working with. The device sees an application it knows how to display. In some cases, special-purpose hardware is used on each side of the network connection to increase performance, allow many users to share a single client system, or allow a single individual to see multiple displays.

Today this function has been enhanced to allow a complete virtual desktop to execute on a server housed in the data center, while users work as if they had a dedicated personal computer at their command.
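
To make the separation between where an application runs and where it is displayed concrete, here is a deliberately simplified Python sketch: the "application" runs in a server thread and sends only its rendered text output over a socket, while the thin client just displays what it receives. The host, port and frame contents are invented for the example; real access virtualization products are, of course, far more sophisticated.

```python
# Deliberately simplified model of access virtualization: the application runs on the
# "server" and only its rendered output crosses the network; the "client" just displays it.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # illustrative values

def server():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for frame in range(3):
                # The "application" produces its output here; the client never runs it.
                conn.sendall(f"frame {frame}: output rendered on the server\n".encode())
                time.sleep(0.1)

def client():
    time.sleep(0.2)                       # give the server a moment to start listening
    with socket.create_connection((HOST, PORT)) as conn:
        while data := conn.recv(1024):
            print(data.decode(), end="")  # the thin client only displays what it receives

t = threading.Thread(target=server)
t.start()
client()
t.join()
```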

This approach was first demonstrated in the 1960s, when IBM and other mainframe suppliers developed special-purpose communications controllers that allowed interactive and batch applications to work without being programmed to know how each and every access device actually worked. This approach is still at the heart of mainframe and midrange operations, and it has been enhanced and optimized over the decades it has been in use.

It appeared in the industry standard X86 world in the early 1990s, when Citrix brought that technology to market. Microsoft and many others have since introduced products offering similar capabilities.

Application virtualization

Application virtualization is software technology allowing applications to run on many different operating systems and hardware platforms. This usually means that the application has been written to use an application framework and the application framework handles the details of operating system, network and storage interaction for the application.

It also means that applications running on the same system that are not based upon this application framework do not get the benefits of application virtualization. More advanced forms of this technology offer the ability to restart an application in case of a failure, start another instance of an application if the application is not meeting service-level objectives, or provide workload balancing among multiple instances of an application to achieve high levels of scalability. Some really sophisticated approaches to application virtualization can do this magical feat without requiring that the application be re-architected or rewritten using a special application framework.
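
The restart-on-failure behavior described above can be sketched as a small supervisor loop. The supervised command (a hypothetical worker.py) and the restart limit are placeholders; real application virtualization frameworks layer health checks, service-level monitoring and load balancing on top of this basic idea.

```python
# Minimal supervisor sketch: restart a worker process whenever it exits abnormally.
# The command and the restart limit are illustrative placeholders.
import subprocess
import time

WORKER_CMD = ["python3", "worker.py"]   # hypothetical application to supervise
MAX_RESTARTS = 5

def supervise():
    restarts = 0
    while restarts <= MAX_RESTARTS:
        process = subprocess.Popen(WORKER_CMD)
        return_code = process.wait()             # block until the application exits
        if return_code == 0:
            print("worker exited cleanly; not restarting")
            return
        restarts += 1
        print(f"worker failed with code {return_code}; restart {restarts}/{MAX_RESTARTS}")
        time.sleep(1)                             # simple pause before restarting
    print("restart limit reached; giving up")

if __name__ == "__main__":
    supervise()
```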

This concept can be traced back to IBM's CICS (Customer Information Control System). Applications developed to work with CICS gained a number of benefits, including high-performance queue management and reliable communications, and they did not need to be modified each time a new type of terminal device was introduced. IBM has continued to grow and enhance the capabilities of its application frameworks since the introduction of this technology in the 1960s.

Application virtualization technology has grown to include application workload management and application encapsulation. An encapsulated application may be able to execute in a previously incompatible environment, say a new version of the operating environment, or be easily delivered to a remote system over a high-speed network.

Processing virtualization

Processing virtualization is hardware and software technology that hides the physical system configuration from operating system services, operating systems, or applications. This type of virtualization technology can make one system appear to be many, or harness many individual systems together so that they appear to be a single computing resource. Processing virtualization makes it possible to achieve goals ranging from raw performance and high levels of scalability to reliability/availability, agility, and the consolidation of multiple environments onto a single system.

Optimal support of processing virtualization includes the requirement that processors, memory controllers, system communication busses, and controllers for both storage and networking be enhanced for the demands virtual environments make on the underlying physical system.

While it is possible to support these functions with many different types of processors, processors that include instructions for the creation, movement and destruction of virtual systems can optimize the performance of virtual environments.
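
On Linux, one quick way to see whether a processor exposes such instructions is to look for the Intel VT-x (vmx) or AMD-V (svm) flags in /proc/cpuinfo. The sketch below assumes a Linux system and is only a capability check, not a hypervisor.

```python
# Check /proc/cpuinfo for hardware virtualization support flags (Linux-specific sketch).
# "vmx" indicates Intel VT-x; "svm" indicates AMD-V.

def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx", "svm"} & flags
    return set()

found = hardware_virtualization_flags()
if found:
    print("hardware virtualization extensions present:", ", ".join(sorted(found)))
else:
    print("no vmx/svm flags found; virtualization may be disabled or unsupported")
```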

This type of technology first appeared in the 1960s on mainframes and re-emerged in midrange systems in the 1970s and 1980s. It was introduced for industry standard systems in the late 1990s.

Even though processing virtualization is only now becoming commonplace in the world of industry standard systems, it has become a standard part of nearly all mainframe and midrange systems installations.

IBM's Parallel Sysplex, for example, appears to workloads as a single machine. In reality, a sysplex can be configured with up to 32 systems. Workloads live in a highly extensible environment, and the failure of a system is handled by the hardware and the operating system without workloads having to deal with the issue. Furthermore, the careful balance of hardware, software, networking and storage technology has made it possible for these System z configurations to run at utilization levels well above 90% in real-world applications.

The technology available on industry standard systems is improving over time, but still has not achieved the levels of integration and performance found in today's mainframes.

Network virtualization

Network virtualization is hardware and software technology that presents a view of the network that differs from the physical view. A personal computer, for example, may be allowed to “see” only the systems it is permitted to access. Another common use is making multiple network links appear to be a single link, which allows that link to deliver higher levels of performance and reliability.

Another very important function of network virtualization is enhanced security. It is possible to create private virtual networks that share a single network link. Only authorized systems and applications are allowed to communicate, and even then only on the virtual network to which they are assigned.
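
The isolation idea can be modeled in a few lines of Python, loosely in the spirit of VLAN tagging: endpoints that share one "physical link" are assigned virtual network IDs, and a frame is delivered only to endpoints on the same virtual network. The endpoint names and IDs are invented for the example.

```python
# Toy model of network virtualization: one shared "physical link" carries traffic for
# several virtual networks, and frames are delivered only within their own virtual network.

class SharedLink:
    def __init__(self):
        self.endpoints = []          # list of (name, virtual_network_id, inbox)

    def attach(self, name, vnet_id):
        inbox = []
        self.endpoints.append((name, vnet_id, inbox))
        return inbox

    def send(self, sender, vnet_id, payload):
        # Every frame crosses the same physical link, but only endpoints whose
        # virtual network ID matches the frame's tag ever see it.
        for name, endpoint_vnet, inbox in self.endpoints:
            if endpoint_vnet == vnet_id and name != sender:
                inbox.append(payload)

link = SharedLink()
hr_inbox = link.attach("hr-server", vnet_id=10)
lab_inbox = link.attach("lab-server", vnet_id=20)

link.send("hr-desktop", vnet_id=10, payload="payroll query")
print(hr_inbox)    # ['payroll query']  -- delivered within virtual network 10
print(lab_inbox)   # []                 -- isolated: the lab never sees that traffic
```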

This approach was first used in mainframe-based networks and has since become standard practice across networking of all kinds.

Storage virtualization

Storage virtualization is hardware and software technology that hides where storage systems are located and what type of device is actually storing applications and data. This technology allows many systems to share the same storage devices without knowing that others are also accessing them. It also makes it possible to take a snapshot of a live system so that it can be backed up without hindering online or transactional applications.

Accelerating and optimizing storage requires an in-depth understanding both of the capabilities of the underlying storage devices and of how workloads make use of programs and data. It is then possible to move storage objects to the device and location offering the proper balance of power consumption, heat production, performance, reliability and cost. An additional benefit is that it is possible to use lower-cost, lower-performance storage and still have workloads experience excellent performance.
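
The placement idea can be sketched as a simple policy: track how often each storage object is accessed and keep the busiest objects on the fast (more expensive) tier, letting cold objects fall to the lower-cost tier. The tier size, object names and access pattern below are invented for the example.

```python
# Illustrative tiering policy: place frequently accessed objects on the fast tier and
# everything else on the low-cost capacity tier. Sizes and names are made up.
from collections import Counter

FAST_TIER_SLOTS = 2            # how many objects the fast (expensive) tier can hold

access_counts = Counter()

def record_access(object_id):
    access_counts[object_id] += 1

def plan_placement():
    hot = {obj for obj, _ in access_counts.most_common(FAST_TIER_SLOTS)}
    return {obj: ("fast" if obj in hot else "capacity") for obj in access_counts}

# Simulate a workload in which one object is read far more often than the rest.
for _ in range(50):
    record_access("orders.db")
record_access("archive-2019.tar")
record_access("archive-2020.tar")

print(plan_placement())
# e.g. {'orders.db': 'fast', 'archive-2019.tar': 'fast', 'archive-2020.tar': 'capacity'}
```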

While this capability is just emerging in the world of industry standard systems, it has been available in the world of mainframe and midrange systems for a number of years.

Management for virtual environments

Management for virtual environments is far more complex than managing a single physical machine. Management software must have the capabilities to monitor and manage physical systems, storage devices, networking devices, operating systems, database management systems, application frameworks and the applications themselves.

To make optimal use of resources, it must be possible to orchestrate the use of all resources so that only the necessary resources are devoted to each workload. This requires the cooperation of hardware and software developers.
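
As a simple illustration of "use only the necessary resources," the sketch below places each workload on the first host with enough spare CPU and memory, a first-fit policy. The host sizes and workload demands are invented; real orchestration tools weigh far more factors than this.

```python
# First-fit placement sketch: give each workload to the first host with enough spare
# CPU and memory. All numbers below are invented for illustration.

hosts = [
    {"name": "host-a", "cpu_free": 8, "mem_free_gb": 32},
    {"name": "host-b", "cpu_free": 4, "mem_free_gb": 16},
]

workloads = [
    {"name": "web", "cpu": 2, "mem_gb": 4},
    {"name": "db", "cpu": 6, "mem_gb": 24},
    {"name": "batch", "cpu": 4, "mem_gb": 8},
]

def place(workload):
    for host in hosts:
        if host["cpu_free"] >= workload["cpu"] and host["mem_free_gb"] >= workload["mem_gb"]:
            host["cpu_free"] -= workload["cpu"]       # reserve only what the workload needs
            host["mem_free_gb"] -= workload["mem_gb"]
            return host["name"]
    return None                                        # no host has enough spare capacity

for workload in workloads:
    print(workload["name"], "->", place(workload) or "unplaced")
# web -> host-a, db -> host-a, batch -> host-b
```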

Summary

If we stop and take stock of an organization's requirements (optimal resource utilization, high levels of both reliability and availability, and simplicity and cost reduction), it is clear that IT decision-makers must consider all of the reasonable alternatives available to them.

Mainframes and midrange systems from suppliers such as IBM can’t be merely thought of as a legacy of the past. They have not only been the source of many of today's innovations; they continue to hold advantages over other platforms, and their advanced, mature virtualization capabilities allow users to drive higher utilization. The energy and cost savings associated with fewer wasted compute cycles shouldn’t be ignored. Suppliers such as IBM are not resting on the history of their successes; they are pushing technology forward at a rapid pace. Some of these innovations will likely move into the world of industry standard systems as well.

Regardless of whether a workload is designed for traditional batch execution, highly interactive client-server computing, the Web or even cloud computing, these environments should be considered.
