
Why do people deploy processing virtualization?

Written by Dan Kusnetzky, Contributor

Processing virtualization, in the Kusnetzky Group model of virtualization technology, is technology that operates at or below the operating system level and creates a logical or virtual environment that differs in some way from the actual physical environment.

There are a number of benefits to working in a logical or virtual environment - if the proper technology is selected. As in other areas of technology, choosing the wrong technology is very likely to produce results that fall short of what was expected or desired.

This happens often enough that an IT "technical term" has emerged to describe the results of this type of decision.

A technical term we all know

I know, I know, you're wondering what that term is. Have you ever heard someone exclaim "this sucks" when what they really meant was "the performance characteristics of this solution are less than optimal or less than management expected"?

Processing virtualization software is much more than virtual machine software

Processing virtualization software covers a broad spectrum, ranging from software that links many machines and makes them appear to be a single computing resource to software that allows the resources of a single machine to be presented as several individual machines.

Different goals, different choices

  • Higher performance - Grid computing or parallel processing monitors make it possible for a single application, or the data needed by a single application, to be spread over a number of machines. Since many machines are working on the same task, that task can be performed much faster. A side effect of this technology is an increase in application reliability as well: if a unit of work doesn't complete on one machine, it is simply reassigned to another machine, and in the end the task is completed (a minimal sketch of this reassignment appears after this list).
  • Increased scalability - Workload management software that is part of many clustering products makes it possible for the same application to run on many machines. As work comes in from the network, it is sent to the machine with the most available resources (see the dispatch sketch after this list). Each instance of the application will run no faster than the machine it currently resides upon. This approach also has the side effect of increased application reliability: if a single application instance fails, work continues on the other machines. The results of a single in-flight transaction can still be lost unless other steps are taken.
  • Increased reliability or availability - HA/failover software that is part of many clustering products makes it possible for an application to survive the loss of its current host or some other critical resource. The clustering software waits for "heartbeat" or "keep alive" messages to come in from all of the machines that are cluster members. If one of the machines stops sending these messages, it is declared dead (even if it is still running) and is removed from the cluster. The work assigned to that machine is started up on another machine in the cluster and resumes from the last checkpoint (a minimal sketch of this heartbeat-and-failover loop appears after this list). This process, called a "state transition," can take quite some time. If higher levels of reliability or availability are needed, an organization would deploy a fault tolerant machine rather than attempting to use clustering software.
  • Workload consolidation - Virtual machine software or operating system virtualization software makes it possible for the resources of a single machine to be partitioned into what appear to be multiple smaller machines. Virtual machine software would be selected if a diverse mix of operating environments is needed. Operating system partitioning would be selected if all of the applications run on a single operating environment, such as Linux, Unix or Windows. Since many workloads share the same physical machine, this approach, taken by itself, would not be suggested if the goals are higher performance, higher levels of scalability or even application reliability.
  • Application agility - Several different approaches make it possible for the environment to move applications and other computing resources from machine to machine to achieve a service level objective or to comply with a set of policies. In the past, this was the realm of single system image clustering products. We're now seeing a new approach to this old problem being broadly adopted: the use of either virtual machine software or operating system virtualization/partitioning software combined with an orchestration manager and virtual machine or operating system partition migration software. This new approach reduces or eliminates the need for agile applications to be architected for a specific clustering manager.
  • Unified management domain - Organizations, in order to reduce the costs of administration and operations, may choose to use a clustering software product, or orchestration software combined with migration software, to make a multi-machine environment more easily manageable.
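
To make the grid computing point concrete, here is a minimal sketch, in Python, of a coordinator that hands out work units and reassigns any unit whose node fails. The node names, failure rate and queue are illustrative assumptions, not a description of any particular grid product.

    import queue
    import random

    # Minimal illustration: a coordinator hands work units to worker nodes and
    # re-queues any unit whose worker fails, so the overall task still completes.
    # The node names and failure rate are assumptions made for this sketch.

    work_units = queue.Queue()
    for unit_id in range(10):
        work_units.put(unit_id)

    workers = ["node-a", "node-b", "node-c"]  # hypothetical grid members
    completed = []

    while not work_units.empty():
        unit = work_units.get()
        worker = random.choice(workers)
        if random.random() < 0.2:             # simulate a node failing mid-unit
            print(f"{worker} failed on unit {unit}; reassigning it")
            work_units.put(unit)              # the unit goes back for another node
        else:
            completed.append(unit)
            print(f"{worker} completed unit {unit}")

    print(f"all {len(completed)} units completed despite individual failures")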
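
The workload management idea can be sketched the same way. The hypothetical dispatch function below simply routes each incoming request to whichever machine currently has the fewest active requests; real workload managers weigh CPU, memory and other resources, so treat this only as an illustration with made-up host names.

    # Minimal illustration of workload management: each incoming request is sent
    # to whichever machine currently has the fewest active requests.

    machines = {"app-1": 0, "app-2": 0, "app-3": 0}  # hypothetical host -> active requests

    def dispatch(request_id: int) -> str:
        """Route the request to the least-loaded machine and return its name."""
        target = min(machines, key=machines.get)
        machines[target] += 1
        return target

    for request_id in range(8):
        print(f"request {request_id} -> {dispatch(request_id)}")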
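
Finally, a rough sketch of the heartbeat-and-failover behavior described above. The member names, timeout value and checkpoint label are assumptions made for illustration; actual clustering products use their own membership protocols and state transition machinery.

    import time

    # Minimal illustration of HA/failover monitoring: a member whose heartbeat is
    # older than the timeout is declared dead and its work is restarted elsewhere
    # from the last checkpoint. Names, timeout and checkpoint label are assumed.

    HEARTBEAT_TIMEOUT = 5.0                          # seconds
    last_heartbeat = {}                              # member -> last heartbeat time
    checkpoints = {"db-primary": "checkpoint-1042"}  # last checkpoint per workload

    def failover(dead_member: str) -> None:
        survivor = next(iter(last_heartbeat), None)
        checkpoint = checkpoints.get(dead_member, "none")
        if survivor:
            print(f"restarting {dead_member}'s work on {survivor} from {checkpoint}")

    def check_cluster(now: float) -> None:
        for member, seen in list(last_heartbeat.items()):
            if now - seen > HEARTBEAT_TIMEOUT:
                print(f"{member} missed its heartbeat; declaring it dead")
                del last_heartbeat[member]           # remove it from the cluster
                failover(member)

    now = time.time()
    last_heartbeat["db-primary"] = now - 10.0        # primary has fallen silent
    last_heartbeat["db-standby"] = now               # standby is still reporting
    check_cluster(now)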

Wrong choices lead to problems

I believe it was Abraham Maslow who said, "He that is good with a hammer tends to think everything is a nail." Many IT decision-makers create problems for themselves by selecting a technology or a product first and only then looking at what needs to be accomplished. When this happens, we see things such as virtual machine software combined with migration software applied to a task better suited to a grid processing monitor or a clustering software product.

While it is true that suppliers are improving their software, in the end no amount of improvement will overcome the problems caused by an inappropriate choice of processing virtualization software.

Have you had personal experiences with this sort of mistake?
