CA and VM Stall

Written by Dan Kusnetzky, Contributor

Recently I got an update from CA about a study Oliver Wyman recently released.  CA's Andi Mann (a great guy, by the way) discussed the results with me.  CA apparently intends to present these results in support of the catchy phrase "VM Stall." Although Andi is a very persuasive person, he didn't entirely persuade me that the phrase fits. That said, CA really is on to something if the question is what the main inhibitors to the adoption of virtual machine technology are.

In my view, the findings point out some of the factors that are inhibiting the adoption of virtual machine technology, not a stall of some kind. To an IT practitioner, "stall" is a technical term that means something else: it usually refers to a resource needed by a process or workload being unavailable. The process stalls, of course, while waiting for that resource to become available again.

If that is the reader's definition of the word stall, the reader would expect the study to focus on what happens when virtual machines run in a physical environment that is not sufficiently configured for their operation - virtual machines stalling while waiting for processor time, memory or storage. Clearly, if virtual machines were waiting for critical resources, the workloads they contain would not be running at peak efficiency. This, of course, isn't what the study was examining at all.
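To make that distinction concrete, here is a minimal sketch (my own illustration, not anything from the study) of a stall in the traditional sense: two simulated workloads contend for a single shared resource, and one of them blocks until the other releases it.

```python
import threading
import time

# A sketch of "stall" in the traditional IT sense: a workload blocks
# because a resource it needs -- here, a lone CPU slot -- is held by
# another workload. (Illustrative only; not part of the study.)

cpu_slots = threading.Semaphore(1)   # pretend only one CPU slot exists
waits = []                           # how long each workload stalled

def workload(name):
    start = time.monotonic()
    cpu_slots.acquire()              # stalls here while the slot is held
    waits.append(time.monotonic() - start)
    try:
        time.sleep(0.1)              # simulate work while holding the slot
    finally:
        cpu_slots.release()

threads = [threading.Thread(target=workload, args=(f"vm{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"stall times (seconds): {[round(w, 2) for w in waits]}")
```

One workload acquires the slot immediately and waits essentially no time; the other stalls for roughly the length of the first workload's run. That waiting-on-a-resource behavior is what the term means to practitioners, and it is not what the study measured.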

Let's look at a summary of the findings to discover what is really being examined. Here is a quick recap of CA's presentation of the findings:

Note: this excerpt uses CA's terminology even though I think that it is somewhat misleading.

1. The Reality of VM Stall
The issue of VM stall was highlighted by several findings:

  • Around one-third (34 percent) of organizations have only deployed virtualization for five to 20 percent of servers – typically the ‘low-hanging fruit’ of test and development systems, Web servers, print servers, etc.
  • Most respondents (87 percent) have virtualized less than 40 percent of their servers.
  • Only 12 percent have reached the final phase of virtualization, with more than 40 percent of servers virtualized.
  • On average, respondents have virtualized just 39 percent of their servers.

2. Key Causes of VM Stall
Respondents cited several causes for their inability to expand their virtualization deployments. While budget remains the top barrier to new technology deployment, the leading non-monetary causes for VM stall include:

  • The high operational risk of a failed migration;
  • The inability of available tools to meet management needs; and
  • The lack of resources skilled in virtual server management.

3. Leading Virtualization Objectives
The research further found that at these early stages, the focus tends more toward cost savings. At this stage, the leading objective for virtualization, as cited by 39 percent of organizations, was improved efficiency and headcount reduction. A further 32 percent cited hardware and capital cost reduction as a primary virtualization goal.

Only a few organizations have progressed to a more mature stage of adoption, but here focus shifts toward higher-order business outcomes. At later stages, the flexibility to meet changing business demands was the leading virtualization objective, cited as a primary goal by 43 percent of organizations.

4. Virtualization-Specific vs. Integrated Management
In surveying organizations’ preferences for dedicated vs. integrated virtualization management solutions across 11 different management disciplines, it was clear that management needs vary by maturity, discipline, and scale.

A significant minority of organizations (23 to 33 percent, depending on maturity) expressed a preference for virtualization-specific management tools. However, many respondents (47 to 53 percent) expressed a preference for integrated physical plus virtual management tools.

5. Heterogeneous Hypervisor Support
Moreover, the research data reinforces the fundamentally heterogeneous nature of most virtualization deployments at their deepest levels. Almost two-thirds of all respondents (65 percent) reported employing multiple hypervisors in at least some capacity (ranging from early evaluation through to full deployment), while almost one-fifth (19 percent) reported employing four or more hypervisors. This heterogeneity also contributes to the difficulty in finding adequate management tools to support virtualization environments.

Visit http://www.ca.com/us/collateral/supporting-pieces/na/Virtualization-and-Management.aspx for more information about the Oliver Wyman research data. Visit ca.com/virtualization to learn about CA Technologies virtualization management solutions.

Snapshot Analysis

If we look at the results of the study, it is clear that 1) stall refers to organizations waiting to adopt virtualization technology and 2) the only type of virtualization that was considered was the use of virtual machine software to create virtual servers.  In that context, the study could be a valuable tool to understand what is happening.

Since my view of virtualization includes seven layers of technology, and virtual machine software is only one component of the virtual processing layer, at best I can see the results being useful for a narrow slice of virtualization technology. Virtual machine technology is, of course, a very important part of a virtualized environment.

If we restrict our view to the use of virtual machine technology, the findings are consistent with portions of the findings of other studies I have seen.  In those areas, this study could be a valuable tool for decision makers. There are other areas, however, in which virtual machine technology may not be the right tool to pull out of the IT developer's toolbox.

There are many reasons organizations adopt virtualization technology (when the term is being used broadly). Some of the more common reasons are:

  • increasing individual application performance - parallel processing technology would be used
  • increasing individual application scalability - workload management technology would be used
  • workload consolidation and optimization - monitoring and automation technology would be used
  • creating a unified management domain to reduce overall administrative costs - system and workload provisioning, monitoring, administration and updating technology would be used
  • creating a more resilient/available environment (also including disaster prevention and recovery) - workload movement and failover technology would be used

Thanks Andi for presenting an interesting study!
