This is a segment of a recently published Kusnetzky Group paper. I thought you might enjoy reading it here.
The appearance of virtual systems and virtual resources in the industry-standard part of the data center has offered the hope of increased flexibility and reduced costs. It has also introduced challenges that were not part of the operational plans of many organizations. Have you seen this in your organization?
In a recent post, Virtual Machines - The Challenge of Vision, I explored the issues associated with tracking and monitoring virtual environments. This time, we take the next step. If we fix the visibility challenge, what about the manageability challenge?
What policies are in force in the virtual world?
It is clear that virtual environments, like physical environments, need to function according to the policies set by the IT organization, which are in turn a representation of the organization's overall policies and strategy. Best practices and the execution of policies have become routine in the management of physical systems and resources in most medium and large organizations. Having succeeded in the physical space, most organizations have chosen to extend these practices as they move applications and systems into virtualized environments. Whether this is enough remains to be seen. Industry-standard systems that support virtual resources are quite different from what's been seen in the past in this part of the data center.
What do organizations want in a solution to the Virtualization management puzzle?
Virtualization introduces new elements to the puzzle of policy management, including the following:
- Network access to both virtual and physical resources must be controlled. This includes both physical to virtual and virtual to virtual communications.
- Mingling workloads running on the same or different operating systems on the same machine under a single hypervisor has many implications, ranging from application performance management to availability/failover management to compliance management.
- Virtual resources can be created, used and then destroyed much more easily than physical resources. Who manages this process? Who maintains logs for auditing purposes? Physical hosts can easily become clogged with virtual "ghosts" from the past that are no longer wanted or needed.
- It is all too easy for a malicious person to create a rogue copy of a virtual machine to breach security or management procedures. A non-malicious case would be someone putting up a virtual server to support a music or video sharing service. On the one hand, this could lead to theft of critical data. On the other, it could lead to legal disputes over the sharing of digital content.
Who is responsible for administering organizational policies in this area? Can the organization be certain that all copies of highly proprietary data and applications are accounted for? What is the policy when "extras" are discovered?
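As an illustration only, the lifecycle and auditing questions raised above could be addressed with something as simple as an append-only event log that records who created or destroyed each virtual machine, and that can answer "what is still out there?" All names and structures here are hypothetical, a minimal sketch rather than any vendor's implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class VMEvent:
    """One lifecycle event for a virtual machine."""
    vm_id: str
    action: str          # "create" or "destroy"
    requested_by: str    # recorded for auditing purposes
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class VMAuditLog:
    """Append-only log of virtual machine lifecycle events (hypothetical)."""

    def __init__(self):
        self.events: list[VMEvent] = []

    def record(self, vm_id: str, action: str, requested_by: str) -> None:
        self.events.append(VMEvent(vm_id, action, requested_by))

    def ghosts(self) -> set[str]:
        """Machines created but never destroyed -- candidates for the
        'clogged with ghosts' problem described above."""
        live: set[str] = set()
        for e in self.events:
            if e.action == "create":
                live.add(e.vm_id)
            elif e.action == "destroy":
                live.discard(e.vm_id)
        return live
```

A policy tool built on such a log could answer the auditing questions directly: every create and destroy carries a requester, and anything still "live" long after its purpose has passed can be flagged for review.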
IT executives are increasingly facing the fact that there really is no good way of dealing with the following issues:
- Determining what policies are best suited to address the challenges described above
- Having a mechanism to set those policies consistently, throughout the virtual infrastructure
- Ensuring and reporting on the success of policy enforcement in the environment
What is clear is that it is no longer easy to determine what physical and virtual systems are doing, whom they are doing it for, or, more importantly, what they are not doing (outages).
Everyone is talking but, do they have a solution?
If one does a quick scan of industry announcements, it's easy to see that the suppliers of virtual machine software have focused their efforts on developing tools designed to manage their own hypervisors, not the multitude of virtual machines they enable. The companies providing broad management frameworks have concentrated on managing physical resources and haven't yet stepped up to managing virtual resources. Knowing what everything is doing on a moment-by-moment basis can be quite challenging and is likely to require tools that are only now emerging.
If an organization needs to maintain an audit trail of where the computing was done, where the data is and other important data points, it increasingly faces a very difficult challenge. Complying with some regulations may be impossible unless there is a structured, well-defined way to track everything. Organizations simply must have a trail of each virtual machine's movements, a history, or a "chain of custody," as workloads transfer from place to place and from test beds to actual production.
While a good configuration management tool is a good place to start, such tools usually require that agents or some other form of instrumentation be installed. This approach, of course, won't work in the case of a rogue virtual system.
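One agentless alternative, sketched below with hypothetical data rather than any real hypervisor or CMDB API, is to compare what the hypervisor itself reports as running against what the configuration management database has on record. A rogue virtual machine shows up in the first set but not the second, so no agent on the rogue system is required:

```python
def find_rogue_vms(hypervisor_inventory, cmdb_registered):
    """Reconcile the hypervisor's view of running VMs with the CMDB's
    records (both passed in as plain iterables of VM names).

    Returns machines that are running but unregistered ("rogue") and
    machines that are registered but not running ("missing" -- possibly
    an outage, possibly a stale record).
    """
    running = set(hypervisor_inventory)
    registered = set(cmdb_registered)
    return {
        "rogue": running - registered,    # running, but nobody approved it
        "missing": registered - running,  # approved, but not running
    }
```

The design choice here is that the authoritative source of "what exists" is the hypervisor, not the guests themselves, which is exactly why this style of reconciliation catches systems that were never instrumented in the first place.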
What are the characteristics of an ideal solution?
To enable proper lifecycle management of this sort, a solution needs to be:
- Cognizant of the distinct qualities of the virtual environment
- Able to incorporate seamlessly into that environment through easy deployment and scaling, and ultimately support multi-platform datacenters
- Synergistic with the goals of the environment, so that it neither taxes the performance of the infrastructure nor creates bottlenecks
I suspect that there are many other areas that should be addressed over time. What areas do you think have been omitted?