Why is the concept of virtual machine software so sticky?

Summary: As interesting as virtual client systems and virtual server systems are, they're not the panacea that some in the industry present when speaking about their products. Why is it that when I speak to a Kusnetzky Group client, I almost always have to help them understand that virtualization is a much bigger topic than just virtual machine software?


If we stick to the virtual processing layer, there are at least five different technologies that either become part of or get underneath an operating system to offer some interesting, albeit mutually exclusive, benefits. Virtual machine software just must be easier for some to understand.

Other forms of virtual processing software

If we wander through the different types of virtual processing software, it soon becomes apparent that each of them has been developed to achieve a different goal. The goals are high performance, high levels of scalability, high levels of reliability, consolidation and agility. Virtual machine software is wonderful in its place, but it is simply not the right choice when an organization's goals mean that another approach would serve better.

  • Parallel processing monitors/Grid computing - software that allows an organization to segment applications into components or allow multiple instances of an application to all run on separate, independent machines in order to vastly extend overall performance. A side effect of this technology is an improvement in overall reliability and availability of that software. If a unit of work doesn't complete because of an outage, it is simply reassigned to another machine. The work that was being done by the first machine is lost but, in the end, is recreated.

    This is a technology that has been around for decades and so, there are many single-vendor and open source entries in this category. If one needed high levels of performance, this would be the appropriate choice, not virtual machine software. Virtual machine software isn't really designed to allow a single application to use as much processing power as possible.

  • Workload management monitors - increasing overall scalability is the goal of this type of software. It makes it possible for organizations to deploy the same application on many independent systems and then feed the next "unit" of work to the machine having the most available capacity. As with the previous category, this approach has a side effect of improving the reliability and availability of an application. If one unit of work doesn't complete properly, it is simply reassigned to another machine.

    This type of software typically is made available as part of a clustering monitor (see the next category). Once again, virtual machine software would not be the best choice for this need since it is designed to run all instances of an application on a single computer. This, of course, is not a recipe for increased scalability.

  • Clustering monitors - software that "marries" several computers into a single computing resource. This software takes two forms: "shared everything" or "single system image" clustering, and "shared nothing" clustering.

    Single system image clustering makes it possible for the machines to act very much like a symmetric multiprocessing computer that just happens to have been configured in multiple cabinets. Shared nothing clusters are made up of machines that basically do their own work but will pick up work that has been dropped by an outage somewhere in the cluster. In both cases, the systems are much more tightly integrated and often have to run exactly the same version of the operating system on nearly identical machines. In many cases, this configuration may also be known as a "high availability" or HA cluster.

  • Operating system virtualization/partitioning - software that partitions the resources of a single computer running a single operating system so that each "partition" or "container" can be isolated from all of the others. This is a very efficient approach to workload consolidation and agility because only a single operating system is being deployed. Memory, storage and other system resources needed to host multiple operating systems need not be acquired. Depending upon the tasks at hand, this might be a better choice than deploying virtual machine software.
  • Virtual machine software - software that either runs on the physical system or software that runs on a host operating system and allows other guest operating systems to run. In either case, the goal is to partition the resources of a single physical computer so that many different "capsules" or "virtual machines" can run.

    Each of these "virtual machines" runs its own operating system and manages the resources provided by the underlying machine or operating system. This is often the best choice when an organization wants to consolidate independent workloads that formerly ran on separate machines onto a newer, much faster single machine.
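The dispatch-and-retry behavior described for the grid and workload management categories above can be sketched as a toy scheduler. Everything here, the `Machine` class, the capacity units and the simulated outage, is illustrative and not any vendor's API: the next unit of work goes to the node with the most available capacity, and a unit lost to an outage is simply reassigned to another machine.

```python
class MachineOutage(Exception):
    """Raised when a node drops out mid-run (simulated below)."""

class Machine:
    """A hypothetical compute node tracked by the monitor."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # free capacity, arbitrary units
        self.healthy = True

def run_on(machine, unit):
    """Stand-in for real execution; a node named "flaky" always fails."""
    if machine.name == "flaky":
        raise MachineOutage(machine.name)
    return f"{unit} completed on {machine.name}"

def dispatch(machines, unit):
    """Feed the unit of work to the healthy machine with the most
    available capacity; on an outage, reassign the same unit elsewhere."""
    while True:
        candidates = [m for m in machines if m.healthy and m.capacity > 0]
        if not candidates:
            raise RuntimeError("no capacity left in the pool")
        target = max(candidates, key=lambda m: m.capacity)
        try:
            return run_on(target, unit)
        except MachineOutage:
            target.healthy = False  # drop the node, redo the lost work
```

Note that, as in the article's description, the work done on the failed node is lost; only the unit's assignment survives and the result is recreated on another machine.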

New technology muddies the waters

Just to add a bit of spice, many suppliers have decided that with the addition of some sophisticated management software, it would be possible for virtual machines and physical machines to be orchestrated in ways that offered capabilities similar to those offered by clustered computers.

Multiple applications can be running and, when an outage occurs or service level objectives are not being met, virtual machines can be 1) moved to another physical machine having more available resources or 2) moved from a virtual machine onto an available physical machine.

It must be noted that keeping workloads in virtual machines often imposes more overhead on the environment in exchange for a greater level of flexibility.
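The orchestration decision just described, move a virtual machine when its host fails or service levels slip, can be sketched roughly as follows. The data shapes and the 90% load threshold are assumptions for illustration, not the policy of any particular management product:

```python
def rebalance(vms, hosts):
    """Decide which virtual machines to migrate.

    A VM is moved when its current host is down or running hot
    (load above an illustrative 90% threshold). The target is the
    surviving host with the most headroom. Returns a list of
    (vm, from_host, to_host) moves.
    """
    moves = []
    for vm in vms:
        src = vm["host"]
        if not src["up"] or src["load"] > 0.90:
            candidates = [h for h in hosts if h["up"] and h is not src]
            if not candidates:
                continue  # nowhere to go; leave the VM in place
            dst = max(candidates, key=lambda h: 1.0 - h["load"])
            moves.append((vm["name"], src["name"], dst["name"]))
            vm["host"] = dst
    return moves
```

A real orchestration manager layers much more onto this loop (live migration mechanics, affinity rules, admission control), which is part of the overhead-for-flexibility trade mentioned above.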

Confusion anyone?

Although it used to be pretty clear when virtual machine software was or wasn't the appropriate choice, the advances made over the last few years in the area of management software for virtualized resources have made that choice much more difficult. When would an organization select a cluster manager over an environment based upon virtual machine software and an orchestration manager?

The technical answer revolves around whether the organization is seeking the highest level of efficiency or needs to support workloads running on different operating environments. The virtual machine software-based approach allows applications running on different operating environments, or different versions of operating environments, to all play together.

With a vague reference to an old Dilbert comic strip, it is difficult for an executive who doesn't understand whether it would be better to use a "red" or a "blue" database to understand the distinction. So, they'll almost always choose the most publicized approach over another approach that might be technically better.

How does your organization see this?

Topics: Operating Systems, Software, Virtualization


Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.



  • More stuff to go down...

    when the host server crashes.
    • There are significant disadvantages to deploying single machine solutions

      Thanks for pointing out the "all the eggs in one basket" issue with running everyone on one single machine.

      Dan K
    • perhaps some examples would help ...

      .. of the 5 types of virtualisation software you mention. I'm at a loss to think of any for the 1st 3.
      No 1 sounds like, e.g. a couple of CICS initiators in the same system. Sure, it's been around for decades, but I've never heard JES described as virtualisation software. Likewise, IBM 3080 series "dyadic" systems (your type 3) made 2 processors deliver the power of a big single, but it was never called virtualisation.
      My simple-minded understanding of "virtual" is this: it's not really there, but the application or operating system thinks it is and behaves accordingly, albeit possibly a little slower.
      • OpenVMS fits #3

        #3 sounds like OpenVMS clustering. You can have a common system disk or independent disks for the cluster members.

        Resources are visible across the cluster and in most cases the software being run does not have to be cluster aware. OpenVMS clustering technology is very mature, stable, and some clusters have up-times measured in years.

        All in all, a very elegant solution for this aspect of the discussion.
        • Thanks for mentioning it!

          As the former programs manager for VAXclusters in the U.S., I'm really happy that you brought it up! Although this technology can trace its heritage to the early 1980s, it still is the best approach for certain types of problems. Over time, Sun's Solaris Cluster evolved to offer similar characteristics. Qlusters was able to do the same thing using OpenMosix as a foundation on Linux.

          Dan K
  • My Thoughts

    We run about 150 VMs here hosted over a redundant cluster setup. If one machine fails, the others pick up the slack without interruption. But we still have quite a few issues with them. They are VERY handy to have, but they are CERTAINLY not an end-all solution. Oh, we are running VMWare's software.
    • Interesting reference

      Are all of your VMs running the same operating system? If so, it might be useful to consider a different approach - operating system partitioning/virtualization.

      This approach is available for both Windows and Linux in the form of Virtuozzo from SWsoft, oops, they've renamed the company Parallels.

      Almost every supplier of Unix offers that capability in one form or another. The suppliers all use different terms for this capability including wonderfully descriptive phrases such as "local partitions", "zones" or "containers."

      Dan K
  • The OS and virtuals belong in the CPU

    Everything in computers would work great if the operating system was run from the CPU. Now the OS loads from the hard drive into RAM at boot up and then runs in the RAM.
    • I believe that concept was tried several times

      I dimly remember computers designed to run Forth and Pascal that largely did what you're suggesting.

      Since the operating system would be "burned" into the CPU's processor, it would be quite a task to update the software when either a flaw was discovered or when a feature enhancement was required.

      I guess they were seen as too static and too limited for large scale commercial success.

      Dan K
  • VM is a 'container'

    The virtual machine concept is easier for business types to grasp because they can picture a bucket with some kind of concoction within, that doesn't spill out onto the nice shiny hardware floor underneath. Or is shielded from being contaminated by the crud that surrounds it -- take your pick.
  • How much of virtualization is really a bandaid to Windows?

    People mentioned VMS. I used it many years ago, and never once had a problem with "cross-talk" between applications. Hell, we ran six different database systems on a 3-node cluster.

    But with Windows, you're never sure what's going to happen if you install more than one application on a system, even if they've fixed some of the "DLL Hell" situations of past versions. Most IT managers still have a firm rule: one app = one server. My boss even enforces it for our Linux-based applications since they run on the same rack servers that we used to run Windows on.

    To me, today's virtualization craze is simply a bandaid to fix a problem that most "mainframe" OS's fixed decades ago: resource management and sharing between applications.
    terry flores
    • Virtualization is much more than Virtual Machine software

      May I address some of your comments?

      First of all, virtual machine software; such as VMware, Xen, KVM or Hyper-V, represent just one of 5 different types of virtual processing software. Virtual processing software is just one of 6 different types of virtualization software. Much of this technology has been in use on Mainframes and single-vendor midrange systems for the better part of 30 years. So, castigating all of virtualization technology as merely patching perceived faults with Windows is a bit narrow don't you think?

      Second, there are several ways to deal with "DLL Hell" that don't involve virtual machine software at all. Application virtualization from folks such as LANDesk, Thinstall, and Endeavors Technology can deal with that issue fairly effectively.

      Dan K
      • Perhaps I should rephrase ...

        I don't disagree with your base premise that virtualization is more than just "virtual machine software". But if I rephrase the question to practical issues, maybe you can see my point.

        How many implementations of virtualization are driven by difficulties in:

        1) Getting applications to co-exist on a single Windows system?
        2) The time required to get even a single application stable on Windows servers?
        3) Moving a stable Windows environment to a larger platform?

        Virtual machine software does solve all of these problems. You don't have to get apps to co-exist at all. Once a system is stable, you don't mess with it from an OS perspective. And you can move a container/partition from one box to another without messing with drivers, settings, registry entries, et al.

        One thing that bothers me a bit about the current craze is that many IT managers think that virtualization makes a sysadmin's life easier. It's true in some aspects, but a system is still a system, whether it's virtual or physical. A fellow sysadmin had a run-in with his boss: the boss wanted him to spend part of his time on another project, saying that the admin load had been reduced because they only had 40 servers left after a recent DC consolidation. The admin tried to explain that he still had 300+ different instances to manage because they hadn't reduced the number of systems, just the number of boxes.
        terry flores
    • today's virtualization craze is simply a bandaid

      I would change the last word in that sentence to read "diversion" instead of "bandaid".

      It is an effort to distract attention from the fact that they can't fix existing software. Of course your version could still be relevant, in that it is a bandaid to cover the fact up, rather than simply hide it. So I suppose we are both correct, actually. ;-)
      Ole Man