Managers of datacenters built on industry-standard systems dream of moving beyond a static environment. They imagine their lives would be considerably easier if their datacenters did the following things:
- Automatically found unused, and thus wasted, resources on a moment-to-moment basis, including systems, software, storage and networks.
- Automatically re-purposed those resources in a coordinated, policy-based fashion to make optimal use of them, with high-priority tasks receiving resources first.
- Once resources were re-purposed, automatically assigned them to useful tasks.
- Provided each workload with the resources it needed without allowing it to interfere with or slow down other tasks.
- Freed up unneeded resources so they could be powered down, if the organization so desired, to reduce power consumption and heat generation. These resources could later be powered back up, provisioned for the tasks at hand and put to work as needed.
- Added new resources only when currently available resources were truly exhausted.
What's clear is that everything must adapt in real time, in a coordinated way; otherwise problems are simply shuffled around rather than solved.
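As a thought experiment, the control loop described above might be sketched as follows. Everything here (the Resource and Workload types, the simple priority policy) is a hypothetical illustration of the idea, not a description of any vendor's product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    name: str
    powered_on: bool = True
    assigned_to: Optional[str] = None  # workload name, or None if idle

@dataclass
class Workload:
    name: str
    priority: int  # higher number means higher priority
    needed: int    # how many resources the workload still needs

def rebalance(resources, workloads, power_down_idle=True):
    """One pass of the hypothetical control loop: discover idle
    resources, hand them to the highest-priority workloads first,
    then power down anything still unused (if policy allows)."""
    # 1. Discover unused -- and thus wasted -- resources.
    idle = [r for r in resources if r.assigned_to is None]
    # 2. Re-purpose them in priority order.
    for wl in sorted(workloads, key=lambda w: w.priority, reverse=True):
        while wl.needed > 0 and idle:
            r = idle.pop()
            r.powered_on = True
            r.assigned_to = wl.name
            wl.needed -= 1
    # 3. Power down whatever remains idle, per policy.
    if power_down_idle:
        for r in idle:
            r.powered_on = False
    return resources
```

A real implementation would, of course, add continuous discovery, service-level policies and workload isolation; the sketch only shows why coordination matters, since each step depends on the one before it.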
Can Those Dreams Become Real?
Over the past few years, virtualization and automation products have become available for industry standard systems, operating systems and applications. It is now possible for an organization to work with a "logical" or "virtual" view of resources. This logical view is often strikingly different from the actual physical view. Companies such as Cassatt, Racemi, Scalent Systems and a few others have been extolling the virtues of their products in just such an environment.
If we step back for a moment and ask "What does this really mean?" we soon come to the vision that this technology would make a number of things possible including the following:
- System users may see a single physical computer as if it were many different systems running different operating systems and application software.
- A group of systems may present itself as a single, very large computer doing the work.
- Individuals may access applications from devices, and over networking technology, that didn't exist when developers created those applications.
- The environment may also present long-obsolete devices as available for use even though none are actually installed.
The only limits would be those the developers and designers had in their thinking.
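The first two views in the list above can be sketched in a few lines. The host names and capacities below are made-up illustrations of the idea that the logical view need not match the physical one:

```python
# Physical inventory: two hosts with 64 GB of RAM each (hypothetical numbers).
physical_hosts = {"host-a": 64, "host-b": 64}

# View 1: one physical host carved into several virtual systems,
# each apparently running its own operating system and software.
virtual_machines = {
    "vm-linux-web":  {"host": "host-a", "ram_gb": 8},
    "vm-win-legacy": {"host": "host-a", "ram_gb": 4},
}

# View 2: many physical hosts aggregated into one large logical computer.
def aggregate(hosts):
    """Present a group of hosts as a single logical system."""
    return {"name": "big-logical-system", "ram_gb": sum(hosts.values())}
```

In both directions the mapping is just that: a mapping, maintained by a virtualization layer, between what users see and what is physically installed.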
In the end, the appropriate use of these layers of technology offers organizations a number of benefits: improved scalability, reliability and performance; far greater agility than is possible in a purely physical environment; and better use of hardware, software and staff resources. Achieving these broader goals requires IT decision-makers to think beyond the server.
The Kusnetzky Group knows of many successful implementations of this type of virtualization using products from several suppliers. Resources are discovered automatically and managed according to the organization's own service-level objectives and policies.
If your datacenter is now running this way, please tell us what products you've used to make this possible.