Make sure virtualization isn't the next big mess

Commentary--Virtualization offers immense opportunities for cost savings--with the proper up-front analysis and planning. CiRBA's Andrew Hillier tells how to avoid the pitfalls.
Written by Andrew Hillier, CiRBA, Contributor
Commentary--Virtualization is gaining popularity as the leading-edge solution to server sprawl in the data center. However, few have considered the potential pitfalls of large-scale adoption. Virtualization offers immense opportunities for cost savings, but only with the proper up-front analysis and planning.

Virtualization, in some cases, can create more problems than it solves. It helps IT create a fluid environment where virtual servers can be turned on or off with a few mouse clicks. This is powerful, but dangerous if you don't know the impact of each change beforehand.

From a planning and execution perspective, virtualizing a small set of servers can be a relatively easy process. Those implementing a small-scale solution are usually familiar with all of the physical devices, the workload and configuration details, and the other key considerations involved. Bringing virtualization to an enterprise environment is a different story entirely. Large-scale virtualization projects demand a data-driven approach that carefully evaluates asset identification, business considerations, technical constraints, and workload patterns.

Most virtualization initiatives are approached as tactical exercises in making sure the technology works at a basic level. The focus becomes “do these assets fit together on this server?” rather than “should they reside on the same server?” When virtualization is rolled out on a large scale, it needs to be part of an overall data center consolidation strategy supported by sufficient analysis and planning. Testing in the lab is not enough: it evaluates technical issues without taking into account the business factors the company will face once it moves into the production environment.

Planning for virtualization is more than a sizing exercise. Virtualization analysis and planning should include:

Managing inventory: The “one box per application” mentality, along with continual upgrades to ever more powerful hardware, has driven a proliferation of diverse servers that are increasingly underutilized. Most organizations don’t have strong enough discipline around purchasing or asset management, making it difficult to inventory servers. Once organizations move into the virtual world, where a logical machine can be created without any paper trail, this problem grows exponentially. Organizations will be caught in the same trap they currently face with physical servers: not knowing which servers exist, who created them, which applications they support, and whether or not they are needed. This has a direct impact on licensing costs and ongoing management, which threatens the cost savings of virtualization. Because of this, organizations need to put technologies and processes in place for tracking virtual servers and enforcing the rules around their creation.
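
As a trivial illustration of this kind of record keeping, here is a minimal sketch in Python, with invented field names and a hypothetical register_vm helper rather than any particular vendor's tooling, that refuses to create a virtual server without an accountable owner and application:

# Trivial sketch of a VM registry so every virtual server has an owner,
# an application, and a creation date on record. Fields are illustrative.

from dataclasses import dataclass
from datetime import date

@dataclass
class VirtualServer:
    name: str
    owner: str
    application: str
    created: date

registry = []

def register_vm(name, owner, application):
    """Refuse to create a VM that has no accountable owner or application."""
    if not owner or not application:
        raise ValueError(f"VM {name} rejected: owner and application are required")
    vm = VirtualServer(name, owner, application, date.today())
    registry.append(vm)
    return vm

register_vm("vm-payroll-01", "HR Ops", "Payroll")
print(len(registry), "virtual servers on record")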

<="" b=""> Although virtualization is not a sizing exercise, workload patterns must be carefully scrutinized to optimize stacking rations and minimize operational risk. Some of the most important aspects of workload analysis, such as complementary pattern detection and time-shift what-if analysis, are often overlooked when determining if workloads can be combined. This can lead to problems such as unnecessarily limiting the possible gain in efficiency or failing to leave enough headroom to cushion peak demands on the infrastructure.

Measuring aggregate utilization against total capacity is another important factor. This provides critical insight into pre-virtualization utilization levels and patterns, and shows both the maximum capacity reduction that is possible and the estimated utilization target for the virtualization initiative.
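
As a rough illustration of that arithmetic, with entirely hypothetical figures and an assumed 60 percent post-virtualization utilization target:

# Rough sketch of the aggregate utilization vs. capacity calculation.
# All figures are hypothetical; capacity is in arbitrary units.

servers = [
    {"name": "srv01", "capacity": 100, "avg_used": 12},
    {"name": "srv02", "capacity": 100, "avg_used": 8},
    {"name": "srv03", "capacity": 100, "avg_used": 25},
    {"name": "srv04", "capacity": 100, "avg_used": 5},
]

total_capacity = sum(s["capacity"] for s in servers)
total_used     = sum(s["avg_used"] for s in servers)

pre_virt_utilization = total_used / total_capacity          # 12.5%
target_utilization   = 0.60                                  # assumed target
capacity_needed      = total_used / target_utilization       # ~83 units
max_reduction        = 1 - capacity_needed / total_capacity  # ~79% less capacity

print(f"Current utilization: {pre_virt_utilization:.0%}")
print(f"Maximum capacity reduction at a 60% target: {max_reduction:.0%}")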

Technical constraints play a large role: The type of virtualization solution being used dramatically affects the technical limitations of the initiative. For example, when analyzing for VMware ESX, relatively few constraints are placed on operating system configurations, since each VM will have its own operating system image in the final configuration. In that case, hardware, network and storage requirements and constraints play the major role in determining the suitability of a given solution. Alternatively, applications being placed into Solaris 10 Zones will “punch through” to a common operating system image, and analysis in this situation should therefore factor in operating system compatibilities as well.
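
The difference can be expressed as a simple compatibility rule. The sketch below, using hypothetical server data, checks the constraint that applies to shared-OS containers such as Solaris Zones but not to full virtual machines such as ESX guests:

# Sketch: candidates for a shared Solaris Zones host must report the same
# OS image; full virtual machines do not impose that constraint.
# Server names and OS strings are hypothetical.

candidates = [
    {"name": "fin01", "os": "Solaris 10 u8"},
    {"name": "fin02", "os": "Solaris 10 u8"},
    {"name": "fin03", "os": "Solaris 9"},
]

def zone_compatible(servers):
    """True only if every candidate reports the same operating system image."""
    return len({s["os"] for s in servers}) == 1

print(zone_compatible(candidates[:2]))   # True: both Solaris 10 u8
print(zone_compatible(candidates))       # False: Solaris 9 outlier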

Technical constraints are uncovered through variance analysis of the source hardware, which often reveals configuration outliers such as token ring cards, IVRs, proprietary boards, direct-connect printers, or other items not part of the standard build. Failing to account for these can jeopardize the initiative. Rules-based configuration analysis is also important for revealing the best regions of compatibility across the IT environment; these regions represent areas of affinity that are strong candidates for VM pools and clusters.
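
A rules-based configuration check can be as simple as comparing each source server against the standard build and flagging deviations. The sketch below uses an invented baseline and invented attributes purely for illustration:

# Illustrative rules-based configuration analysis: flag source servers
# whose hardware or peripherals deviate from the standard build.
# The baseline and attributes are hypothetical.

STANDARD_BUILD = {"nic": "ethernet", "boards": set(), "printers": set()}

servers = {
    "app01": {"nic": "ethernet",   "boards": set(),           "printers": set()},
    "app02": {"nic": "token-ring", "boards": {"proprietary"}, "printers": set()},
    "ivr01": {"nic": "ethernet",   "boards": {"telephony"},   "printers": {"direct-connect"}},
}

def outliers(config, baseline=STANDARD_BUILD):
    """Return the attributes on which a server deviates from the baseline."""
    return {k: v for k, v in config.items() if v != baseline[k]}

for name, config in servers.items():
    deviations = outliers(config)
    if deviations:
        print(f"{name}: review before virtualizing -> {deviations}")
    else:
        print(f"{name}: matches standard build, strong VM-pool candidate")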

Including business constraints in the analysis: Organizations also need to consider availability targets, maintenance windows, application owners, compliance restrictions, and other business issues. Most virtualization planning tools provided by VM vendors don’t go beyond high-level configuration and workload analyses, yet businesses cannot afford to stop there. It’s not unheard of to group virtualization candidates based solely on technical constraints, only to find there is not a single time in the calendar when the shared physical server can actually be shut down for maintenance. Political issues also arise when departments don’t want to share hardware resources, and chargeback models may break down if resource sharing crosses certain boundaries. Sometimes these issues can be overcome, but when they can’t, the business constraints must carry as much weight in the analysis as the technical ones.
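
For example, a basic sanity check is whether servers proposed for the same host share any maintenance window at all. The sketch below uses hypothetical window data:

# Sketch: verify that servers proposed for the same host share at least
# one maintenance window. Window data is hypothetical.

maintenance_windows = {
    "crm-db":  {"Sun 02:00-06:00"},
    "hr-app":  {"Sat 22:00-24:00", "Sun 02:00-06:00"},
    "billing": {"Wed 01:00-03:00"},
}

def shared_windows(candidates):
    """Intersection of maintenance windows across all candidate servers."""
    windows = [maintenance_windows[c] for c in candidates]
    return set.intersection(*windows)

print(shared_windows(["crm-db", "hr-app"]))    # {'Sun 02:00-06:00'}
print(shared_windows(["crm-db", "billing"]))   # set(): no common downtime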

Security and compliance issues: Organizations creating virtual servers also need to carefully consider their storage strategy. This means making sure rules governing access to data are in place and adopting a proper SAN architecture so that data is not stored on the virtual machines themselves. In the virtual world, a whole machine is essentially one file, and there is often a single administrator who has access to many of these files. This can inadvertently create a de facto “super super” user role in the organization, with many security ramifications. Separating data from the application and enforcing tight security permissions are therefore essential to ensure the integrity and privacy of critical data.

Compliance is another key issue that organizations need to consider. As with the security precautions discussed above, it is often necessary to determine whether certain applications or data files should sit on the same server. Regulation prevents organizations in some industries from sharing customer data between divisions or departments. Good virtualization analysis looks for these vulnerabilities, providing a risk matrix that helps the organization ensure it is not violating any compliance or security rules.
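
Such a check can be reduced to a simple co-location rule matrix. The sketch below uses invented data classifications and forbidden pairings purely as an illustration:

# Sketch of a co-location risk matrix: flag placements that would put
# applications with incompatible data classifications on the same host.
# Classifications and rules are illustrative only.

FORBIDDEN_PAIRS = {
    frozenset({"retail-customer-data", "investment-customer-data"}),
    frozenset({"cardholder-data", "general-purpose"}),
}

def colocations_at_risk(placement):
    """Given {host: [app classifications]}, return risky host/pair combinations."""
    risks = []
    for host, classes in placement.items():
        for i, a in enumerate(classes):
            for b in classes[i + 1:]:
                if frozenset({a, b}) in FORBIDDEN_PAIRS:
                    risks.append((host, a, b))
    return risks

plan = {"esx-host-1": ["retail-customer-data", "investment-customer-data"],
        "esx-host-2": ["general-purpose", "general-purpose"]}

print(colocations_at_risk(plan))
# [('esx-host-1', 'retail-customer-data', 'investment-customer-data')]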

Conclusion
Conducting “what if” scenarios on live production servers by testing potential solutions and backing them out again is asking for trouble. Organizations planning large-scale virtualization initiatives need to invest in proper planning and analysis up front to ensure that critical factors that can’t be found in any lab are accounted for. By recognizing that virtualization is not simply a tactical exercise of assessing workload activity, but a comprehensive resource optimization strategy that requires input from the business, organizations will be much more likely to realize the promised cost savings. The best approach is to perform the analysis beforehand in a safe environment, avoiding pitfalls and ensuring a coherent, stable infrastructure from the moment of rollout.

Biography
Andrew Hillier is co-founder and CTO of CiRBA. You can reach him through www.cirba.com.
