Layers of Virtualization at Work
There are many layers of technology that virtualize some portion of a computing environment. Let’s look at each of them in turn.
Access virtualization is hardware and software technology that allows nearly any device to access any application without either having to know too much about the other. The application sees a device it's used to working with. The device sees an application it knows how to display. In some cases, special-purpose hardware is used on each side of the network connection to increase performance, allow many users to share a single client system, or allow a single individual to see multiple displays.
Today this function has been enhanced so that a complete virtual desktop can execute on a server housed in the data center, while users work as if they had a dedicated personal computer at their command.
This approach was first demonstrated in the 1960s, when IBM and other mainframe suppliers developed special-purpose communications controllers that allowed interactive and batch applications to work without being programmed to know how each and every access device actually worked. This approach is still at the heart of mainframe and midrange operations, and it has been enhanced and optimized over the decades it has been in use.
It appeared in the industry-standard x86 world in the mid-1990s, when Citrix brought that technology to market. Microsoft and many others have since introduced products offering similar capabilities.
Application virtualization is software technology that allows applications to run on many different operating systems and hardware platforms. This usually means the application has been written to use an application framework, which handles the details of operating system, network, and storage interaction on the application's behalf.
It also means that applications running on the same system that are not based upon this framework do not get its benefits. More advanced forms of this technology can restart an application after a failure, start another instance of an application that is not meeting its service-level objectives, or balance workloads among multiple instances of an application to achieve high levels of scalability. Some really sophisticated approaches can perform this magical feat without requiring that the application be re-architected or rewritten to use a special application framework.
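The restart-and-balance behavior described above can be modeled in a few lines of Python. Everything here, including the instance and manager classes, their names, and the failure handling, is a hypothetical sketch rather than the API of any actual product.

```python
import itertools


class AppInstance:
    """A stand-in for one running copy of an application (hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} served {request}"


class WorkloadManager:
    """Round-robin balancing plus restart-on-failure: a toy model of the
    behavior described above, not any specific product's interface."""

    def __init__(self, instances):
        self.instances = instances
        self._cycle = itertools.cycle(range(len(instances)))

    def dispatch(self, request):
        for _ in range(len(self.instances)):
            inst = self.instances[next(self._cycle)]
            try:
                return inst.handle(request)
            except RuntimeError:
                # Simulate restarting the failed instance, then
                # retry the request on the next instance in the cycle.
                inst.healthy = True
        raise RuntimeError("no instance could serve the request")


mgr = WorkloadManager([AppInstance("app-1"), AppInstance("app-2")])
print(mgr.dispatch("req-1"))  # served by app-1
mgr.instances[1].healthy = False
print(mgr.dispatch("req-2"))  # app-2 fails, is "restarted"; app-1 serves it
```

The key design point is that the caller of `dispatch` never learns that a failure occurred, which mirrors the transparency the text describes.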
This concept can be traced back to IBM's CICS (Customer Information Control System). Applications developed to work with CICS gained a number of benefits, including high-performance queue management and reliable communications, and didn't need to be modified each time a new type of terminal device was introduced. IBM has continued to grow and enhance the capabilities of its application frameworks since the introduction of this technology in the 1960s.
Application virtualization technology has grown to include application workload management and application encapsulation. An encapsulated application may be able to execute in a previously incompatible environment, such as a new version of the operating environment, or be easily delivered to a remote system over a high-speed network.
Processing virtualization is hardware and software technology that hides the physical system configuration from operating system services, operating systems, or applications. This type of virtualization can make one system appear to be many, or allow many individual systems to be harnessed together and presented as a single computing resource. Processing virtualization makes it possible to achieve goals ranging from raw performance and high levels of scalability to reliability/availability, agility, and the consolidation of multiple environments onto a single system.
Optimal support of processing virtualization requires that processors, memory controllers, system communication buses, and controllers for both storage and networking be enhanced for the demands virtual environments place on the underlying physical system.
While it is possible to support these functions on many different types of processors, processors that include instructions for the creation, movement, and destruction of virtual systems can optimize the performance of virtual environments.
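The "one system appears to be many" direction can be sketched as a toy partitioner that carves a physical host's resources into isolated slices. The class and field names below are illustrative assumptions, not any vendor's interface.

```python
class PhysicalHost:
    """Toy model of a host whose resources are carved into partitions."""

    def __init__(self, cpus, memory_gb):
        self.free = {"cpus": cpus, "memory_gb": memory_gb}
        self.partitions = {}

    def create_partition(self, name, cpus, memory_gb):
        # Refuse to overcommit the physical machine.
        if self.free["cpus"] < cpus or self.free["memory_gb"] < memory_gb:
            raise RuntimeError("insufficient physical resources")
        self.free["cpus"] -= cpus
        self.free["memory_gb"] -= memory_gb
        # Each partition sees only its own slice of the machine.
        self.partitions[name] = {"cpus": cpus, "memory_gb": memory_gb}


host = PhysicalHost(cpus=8, memory_gb=32)
host.create_partition("vm-a", cpus=2, memory_gb=8)
host.create_partition("vm-b", cpus=4, memory_gb=16)
# host.free now records the remaining unassigned capacity
```

A real hypervisor does far more (scheduling, isolation, device emulation), but the bookkeeping idea is the same: each guest sees only the resources assigned to it.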
This type of technology first appeared in the 1960s in mainframes and re-emerged in midrange systems in the 1970s and 1980s. It was introduced for industry-standard systems in the late 1990s.
Even though processing virtualization is only now becoming commonplace in the world of industry standard systems, it has become a standard part of nearly all mainframe and midrange systems installations.
IBM's Parallel Sysplex, for example, appears to be a single machine to workloads. In reality, a sysplex can be configured with up to 32 systems. Workloads live in a highly extensible environment: a failure of a system is managed by the hardware and the operating system without workloads having to deal with the issue. Furthermore, the careful balance of hardware, software, networking, and storage technology has made it possible for these System z configurations to run at utilization levels well above 90% in real-world applications.
The technology available on industry standard systems is improving over time, but still has not achieved the levels of integration and performance found in today's mainframes.
Network virtualization is hardware and software technology that presents a view of the network that differs from the physical view. A personal computer, for example, may be allowed to "see" only the systems it is allowed to access. Another common use is making multiple network links appear to be a single link, which makes it possible for the link to offer higher levels of performance and reliability.
Another very important function of network virtualization is enhanced security. It is possible to create private virtual networks that share a single network link. Only authorized systems and applications are allowed to communicate, and even then only on the virtual network to which they are assigned.
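The isolation model just described can be illustrated with a minimal sketch in which every frame carries a virtual-network tag and endpoints receive only frames tagged for their own network. The names here (`SharedLink`, the `vnet-` identifiers) are hypothetical stand-ins for real VLAN or VPN machinery.

```python
class SharedLink:
    """Toy model of one physical link carrying several private virtual
    networks; delivery is filtered by virtual-network membership."""

    def __init__(self):
        self.endpoints = []  # list of (vnet_id, inbox) pairs

    def attach(self, vnet_id):
        inbox = []
        self.endpoints.append((vnet_id, inbox))
        return inbox

    def send(self, vnet_id, frame):
        # All traffic crosses the same physical link, but only endpoints
        # assigned to the same virtual network ever receive the frame.
        for dest_vnet, inbox in self.endpoints:
            if dest_vnet == vnet_id:
                inbox.append(frame)


link = SharedLink()
hr_inbox = link.attach("vnet-hr")
eng_inbox = link.attach("vnet-eng")
link.send("vnet-hr", "payroll update")
# hr_inbox received the frame; eng_inbox did not, despite sharing the link
```

Real implementations enforce this with tagging standards such as IEEE 802.1Q and with encryption, but the membership check is the conceptual core.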
This approach was first used in mainframe-based networks and has since become a standard used in nearly all communications.
Storage virtualization is hardware and software technology that hides where storage systems are located and what type of device is actually storing applications and data. This technology allows many systems to share the same storage devices without knowing that others are also accessing them. It also makes it possible to take a snapshot of a live system so that it can be backed up without hindering online or transactional applications.
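The live-snapshot capability can be illustrated with a toy copy-on-write volume: a snapshot freezes a point-in-time view of the blocks, while later writes go to an overlay and leave the snapshot untouched. This is a simplified sketch under assumed names, not how any particular storage product implements snapshots.

```python
class CowVolume:
    """Toy copy-on-write volume supporting live snapshots."""

    def __init__(self, blocks):
        self.base = dict(blocks)  # blocks as written before any snapshot
        self.overlay = {}         # blocks written since

    def snapshot(self):
        # Point-in-time view: base blocks merged with writes so far.
        # A backup job can read this while the live system keeps writing.
        return {**self.base, **self.overlay}

    def write(self, block, data):
        # Live writes land in the overlay and never disturb a snapshot.
        self.overlay[block] = data

    def read(self, block):
        # The live view prefers the newest data.
        return self.overlay.get(block, self.base.get(block))


vol = CowVolume({0: "ledger-v1"})
snap = vol.snapshot()          # backup reads from this frozen view
vol.write(0, "ledger-v2")      # the online application keeps writing
# snap still shows "ledger-v1"; vol.read(0) shows "ledger-v2"
```

Production systems track block-level change maps rather than copying whole dictionaries, but the separation of "frozen view" from "live writes" is the same idea.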
Accelerating and optimizing storage requires an in-depth understanding of both the capabilities of the underlying storage devices and the way workloads make use of programs and data. It is then possible to move storage objects to the device and location offering the proper balance of power consumption, heat production, performance, reliability, and cost. An additional benefit is that it is possible to use lower-cost, lower-performance storage and still have workloads experience excellent performance.
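A placement policy of the kind described might look like the following sketch. The tier names, costs, and access thresholds are invented for illustration; a real tiering engine would also weigh latency targets, wear, and migration cost.

```python
# Tiers ordered fastest (most expensive) to slowest (cheapest).
TIERS = [
    {"name": "ssd",  "cost_per_gb": 0.25, "min_accesses": 100},
    {"name": "disk", "cost_per_gb": 0.05, "min_accesses": 10},
    {"name": "tape", "cost_per_gb": 0.01, "min_accesses": 0},
]


def place(accesses_per_day):
    """Pick the cheapest tier that still matches how hot the object is."""
    for tier in TIERS:
        if accesses_per_day >= tier["min_accesses"]:
            return tier["name"]


print(place(500))  # a hot object lands on "ssd"
print(place(3))    # a cold object lands on "tape"
```

Because only genuinely hot objects occupy the fast tier, most data can sit on inexpensive storage while workloads still see good performance, which is the benefit the paragraph above describes.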
While this capability is just emerging in the world of industry standard systems, it has been available in the world of mainframe and midrange systems for a number of years.
Management for virtual environments
Management for virtual environments is far more complex than managing a single physical machine. Management software must be able to monitor and manage physical systems, storage devices, networking devices, operating systems, database management systems, application frameworks, and the applications themselves.
To make optimal use of resources, it must be possible to orchestrate the use of all resources so that only the necessary resources are used for each workload. This requires the cooperation of hardware and software developers.
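The per-workload reservation idea can be sketched as follows; the pool contents and workload descriptions are hypothetical, and a real orchestrator would also handle release, priorities, and placement across many hosts.

```python
# Shared pool of resources available to all workloads (illustrative numbers).
pool = {"cpus": 16, "memory_gb": 64}


def reserve(pool, workload):
    """Reserve only what the workload declares it needs, or nothing at all."""
    needed = workload["needs"]
    if all(pool[r] >= amount for r, amount in needed.items()):
        for r, amount in needed.items():
            pool[r] -= amount
        return True
    return False  # insufficient resources; reservation refused atomically


web = {"name": "web", "needs": {"cpus": 4, "memory_gb": 8}}
db = {"name": "db", "needs": {"cpus": 8, "memory_gb": 32}}
reserve(pool, web)
reserve(pool, db)
# pool now holds only the unreserved remainder, available to new workloads
```

The point of the all-or-nothing check is that a workload never partially claims resources, so whatever remains in the pool is genuinely free for the next workload.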