How to maximize performance in virtualized environments

Virtualization so far has found a home within pre-production and less-than-critical uses--and has yet to find its way into more business-critical areas, says OpTier's Motti Tal.
Written by Motti Tal, Contributor
Commentary--There’s no doubt about the great potential of virtualization to cut IT hardware and management costs and utilize IT resources more efficiently--all while providing a more agile IT infrastructure. Yet so far, virtualization, especially for mission-critical services at larger enterprises, has been more promise than reality.

The truth is that virtualization so far has found a home primarily within pre-production and less-than-critical uses, such as application development and testing labs. It has yet to be found widely within such business-critical areas as financial services portals, trading platforms, sales and distribution applications, or other vital applications.

There are a number of good reasons for this, yet they are surmountable.

First, business and IT managers of critical applications are rightly concerned that a shared virtualized environment means they could lose control over their application performance. The reasoning: virtualized environments are often shared by design, so when conflicting computing demands arise, it could very well be their departments’ systems that get squeezed. In fact, many organizations don’t even have objective records of application service levels or resource allocations with which to assess such an impact.

There’s also a misconception that virtualization, all by itself, will deliver considerable savings. Sure, there are initial savings to be had by consolidating many physical servers onto fewer physical servers running many virtual instances. But you can only consolidate servers this way once. The full, larger return on investment from virtualization comes through the dynamic provisioning of resources whenever they are needed--whether applications, servers, databases, or even entire networks.

At first blush, this might seem counterintuitive. It would appear the cost savings and agility of virtualization would be straightforward: what better technology than virtualization (OS, VMs, or application grids) to expand and contract workloads with business need? The unfortunate reality is that no benefit comes without a cost, or a tradeoff. And while virtualization helps to simplify many aspects of IT, it ends up adding complexity elsewhere.

For instance, when compute nodes are truly virtualized, as in the case of grid infrastructure, they certainly can optimize transaction performance. To do this, they distribute complex tasks, such as risk calculations, across multiple nodes in parallel. Now, imagine hundreds of such transactions computing across a cloud of hundreds of compute nodes, rarely performing the same computation twice on the same nodes, and each time using different parts of the grid cloud. Sometimes a transaction will spread across 100 nodes; other times, 150.
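To make the fan-out pattern concrete, here is a minimal sketch in Python. It uses the standard-library thread pool as a stand-in for grid nodes; the portfolio data, the risk function, and the node counts are all hypothetical, chosen only to show how the same transaction can land on a different number of nodes each run.

```python
from concurrent.futures import ThreadPoolExecutor

def risk_calculation(chunk):
    # Hypothetical per-node work: revalue one slice of a portfolio.
    return sum(position * 1.07 for position in chunk)

def run_on_grid(portfolio, node_count):
    # Split the work across however many nodes the grid grants this run --
    # the same transaction may spread over 100 nodes one time, 150 the next.
    chunks = [portfolio[i::node_count] for i in range(node_count)]
    with ThreadPoolExecutor(max_workers=node_count) as grid:
        return sum(grid.map(risk_calculation, chunks))

portfolio = [100.0] * 1000
# Two runs of the same transaction, spread across different node counts:
# the result is identical, but the execution footprint is not.
total_a = run_on_grid(portfolio, node_count=100)
total_b = run_on_grid(portfolio, node_count=150)
```

The two totals agree, yet no monitoring tool watching individual nodes would see the two runs as the same transaction--which is precisely the visibility gap described above.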

In a maze such as this, how would administrators even begin to go about troubleshooting a slow calculation? The same condition is true for all applications relying on multiple virtualized machines, load balancers, and application servers or database clusters. These complex virtual environments create dependencies that, when broken, misconfigured, or mis-provisioned, are challenging to visualize, let alone diagnose and fix.

In addition, you need the ability to allocate IT resources properly, based on the volume of activity and business need. You don’t want large back-office transactions slowing down real-time financial transactions. And if something goes wrong, or service level agreements (SLAs) aren’t being met because of sluggish performance, you don’t want to spend two or three times as long identifying the problem area because your shared virtualized environment clouds your view of application and transaction dependencies from start to completion.

This can be a complex problem to resolve because many of the performance and system management tools available are able to manage only the virtualization layer, not the applications or transactions within it.

One of the few ways to attain the needed level of visibility into the IT architecture, identify application and transaction dependencies on virtual resources, measure SLAs, guarantee optimal quality of service, and increase the savings achieved by virtualization is through an emerging technology known as Business Transaction Management (BTM). With the proper implementation of BTM, IT managers can concurrently measure the performance of their applications from their users’ perspective and gain granular visibility into how each transaction flows across the entire infrastructure and utilizes shared resources. As a result, they can optimize the performance of their transactions, applications, and the infrastructure itself, to provide a better customer experience and deliver the quality of service that business units expect at lower cost.
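The core idea behind this per-transaction visibility can be sketched with a simple correlation-ID trace. This is not a description of any particular BTM product; it is an illustrative Python sketch in which the transaction ID, tier names, and timings are all hypothetical.

```python
from collections import defaultdict

# One trace per business transaction, keyed by a correlation ID that
# travels with the request across every tier it touches.
traces = defaultdict(list)

def record_hop(txn_id, tier, elapsed_ms):
    """Record how long one tier spent servicing this transaction."""
    traces[txn_id].append((tier, elapsed_ms))

# Hypothetical transaction crossing a shared virtualized stack.
record_hop("txn-42", "load-balancer", 2.1)
record_hop("txn-42", "app-server-vm3", 48.7)
record_hop("txn-42", "db-cluster-node1", 131.5)

def slowest_tier(txn_id):
    # End-to-end view: which tier consumed most of the response time?
    return max(traces[txn_id], key=lambda hop: hop[1])

tier, ms = slowest_tier("txn-42")
```

Because the trace follows the transaction rather than any one machine, the slow tier stands out immediately, even when the virtual machines involved change from one request to the next.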

A recent IDC research report, Business Transaction Management: Facilitating the Management of Virtual Environments, noted the value of BTM for virtualized environments. The report found that because BTM focuses on transactions for managing applications, BTM brings a valuable perspective to managing these dynamic infrastructures. This is made possible, in large part, by the “granular visibility into each transaction executed by any user at all times,” brought by BTM, the report notes. “This level of granularity provides a business focused mechanism for understanding and controlling multiple moving parts--both virtual and physical at the infrastructure level--from a business perspective,” it continues.

At its essence, BTM leverages the power of business transactions that flow through an organization’s IT infrastructure, whether virtualized or not, and results in greater understanding of the service quality, flow and dependencies among virtual partitions, grid engines, cluster nodes, databases, servers, and applications throughout the entire transaction lifecycle. In this way, BTM technologies can help virtualization live up to its promises of cost savings and increased business agility.

Companies can take a number of steps to bring higher levels of performance, visibility, and management to virtualized environments, even shared ones, and provide clarity through the entire transaction lifecycle:

Conduct a pre-migration application study. Prior to transitioning to a virtualized environment, it’s crucial that application performance baselines be set and expectations across business groups be aligned. This ensures that performance goals and SLAs are set by an objective, achievable measure--for example, observed response times per unit of business activity and the resource consumption recorded for each activity.

Conduct a post-virtualization migration application study. Once the application has been migrated to the virtualized environment, it needs to be examined so that all of the effects of virtualization on transaction flow and performance can be identified. Given the characteristics of the virtualized environment, you will need to understand the impact of virtualization on your new production environment and adjust your configuration to achieve a successful migration.

Align application performance with business needs. These application and transaction performance baselines provide better understanding of how customer transactions, service levels, and IT resource utilization fit business needs. It then becomes possible to understand, for instance, which applications and services are meeting their service levels, as well as which are not, and adjust workloads and resources accordingly.

Maintain virtualization visibility. With newfound visibility into a virtualized environment, BTM makes it possible to identify the causes of negative effects on application performance. Thus, managers of virtualized environments can spot developing problems sooner and fix them faster.
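In practice, spotting developing problems sooner means continuously comparing live transaction latencies against the pre-migration baseline. The Python sketch below is a crude stand-in for the comparisons a BTM tool would run continuously; the transaction names, latencies, baseline, and tolerance factor are all hypothetical.

```python
def detect_regressions(live_ms, baseline_p95, factor=1.5):
    """Flag transactions whose live latency exceeds the baseline 95th
    percentile by a hypothetical tolerance factor."""
    return [txn for txn, ms in live_ms.items() if ms > baseline_p95 * factor]

# Hypothetical live latencies (ms) against a 200 ms baseline p95.
live = {"checkout": 190.0, "quote": 450.0, "settle": 120.0}
flagged = detect_regressions(live, baseline_p95=200.0)
```

Only the transactions that have genuinely drifted from their baseline are flagged, so attention goes to the developing problem rather than to every noisy metric in the shared environment.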

In this way, IT managers, business executives, and all other stakeholders can attain the visibility they need to ensure that their infrastructure is set to provide optimal service levels consistently--and that their applications continuously get the necessary resources to do so.

It’s clear that IT departments need to move toward managing their virtualized environments dynamically if cost savings and agility goals are to be met. By combining virtualization with Business Transaction Management, organizations have the tools and the processes in place to get there.

Motti Tal is executive vice president for Marketing and Business Development at OpTier.
