Virtualization has the potential to offer tremendous advantages over physical environments, but in the move to virtual, organizations often lose some things they took for granted. One of those things is clarity about what is going on, where, and with whom.
In a physical environment, such as a typical datacenter, all of the systems, network devices, storage devices, and even power supplies and air conditioning units are carefully organized and labeled with their IP addresses, their network names, and often the applications they support.
The midrange systems from IBM, Sun, or Hewlett-Packard are over in the corner running manufacturing or billing applications; the mainframe sitting in the center of the datacenter is a whiz at processing thousands upon thousands of transactions; and racks of industry-standard (x86) machines are supporting Web-based applications and the organization's collaborative systems.
Finding each of these systems in the datacenter is relatively easy. Traditionally, it has also been easy to learn what each of these systems is doing at any given moment. Quite often datacenter resources are grouped according to the application they’re supporting.
As organizations began moving functions from physical systems to virtual systems, they ran straight into a new problem. Virtual machines are ephemeral. They can be generated, provisioned with the appropriate software, and put into production very rapidly. They can be halted and deleted when they are no longer needed. No labels define their location or presence.
IT executives are increasingly facing the fact that there really is no good way of telling which virtualized applications are running, where they are located at any given moment, which business unit owns them, when they were created, when they should expire, which physical resources they are using, and a whole host of other questions. It is no longer easy to determine what physical systems are doing or, more importantly, not doing.
Technologies such as VMware's vMotion, Citrix/XenSource's XenMotion, and many orchestration products allow organizations to move virtual systems about at will. Knowing where everything is on a moment-by-moment basis can be quite challenging.
If an organization needs to maintain an audit trail of where the computing was done, where the data resides, and other important data points, it increasingly faces a very difficult challenge. Complying with some regulations may be impossible unless there is a structured, well-defined way to track everything. Organizations simply must have a trail of each virtual machine's motion, a history, or a "chain of custody" as workloads transfer from place to place and from test beds to actual production.
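As a rough illustration of what such a chain of custody might look like in practice, here is a minimal sketch in Python. All of the names here (the event types, the field names, the example VM and host names) are hypothetical, invented for illustration; a real tracking system would typically pull this data from the hypervisor's management APIs and store it durably, but the core idea is an append-only log of custody events per virtual machine.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class CustodyEvent:
    """One entry in a virtual machine's chain of custody (hypothetical schema)."""
    vm_name: str    # identifier of the virtual machine
    event: str      # e.g. "created", "migrated", "retired"
    host: str       # physical host where the VM now resides
    owner: str      # business unit responsible for the VM
    timestamp: str  # when the event was recorded (UTC, ISO 8601)

class CustodyLog:
    """Append-only log: events are recorded but never edited or deleted,
    preserving the audit trail."""

    def __init__(self) -> None:
        self._events: List[CustodyEvent] = []

    def record(self, vm_name: str, event: str, host: str, owner: str) -> None:
        """Append a new custody event with the current UTC timestamp."""
        self._events.append(CustodyEvent(
            vm_name, event, host, owner,
            datetime.now(timezone.utc).isoformat()))

    def history(self, vm_name: str) -> List[CustodyEvent]:
        """Return the full audit trail for one VM, oldest first."""
        return [e for e in self._events if e.vm_name == vm_name]

    def current_host(self, vm_name: str) -> str:
        """Return the physical host holding the VM, per its latest event."""
        return self.history(vm_name)[-1].host

# Example: a billing VM is created on one host, then live-migrated to another.
log = CustodyLog()
log.record("billing-vm-01", "created", "host-a", "Finance")
log.record("billing-vm-01", "migrated", "host-b", "Finance")
print(log.current_host("billing-vm-01"))       # host-b
print(len(log.history("billing-vm-01")))       # 2
```

Even a simple structure like this answers several of the questions raised above: which business unit owns a VM, when it was created, and which physical resource it occupies at any given moment.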
What's your plan?
What tools are in use in your organization's datacenter to help discover, track, and manage virtual resources? Who is responsible for managing these resources?