Dreams of a dynamic datacenter

Summary: Managers of industry standard system-based datacenters have dreams of moving beyond a static environment. They imagine that their lives would be quite a bit easier if their datacenters automatically found unused, and thus wasted, resources on a moment-to-moment basis.


Managers of industry standard system-based datacenters have dreams of moving beyond a static environment. They imagine that their lives would be quite a bit easier if their datacenters did the following things:

  • Automatically found unused, and thus wasted, resources on a moment-to-moment basis. These resources include systems, software, storage and networks.
  • Automatically re-purposed those resources in a coordinated, policy-based fashion to make optimal use of them. High-priority tasks should be given resources first.
  • Once re-purposed, automatically assigned those resources to useful tasks.
  • Provided each workload with the resources it needed without allowing it to interfere with or slow down other tasks.
  • Freed up unneeded resources so they could be powered down to reduce power consumption and heat generation, if the organization so desired. These resources could later be powered back up, provisioned for the tasks at hand and put to work as needed.
  • Added new resources only when currently available resources were truly exhausted.
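The behavior described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (the `Workload` class, capacity units and function names are my own inventions, not any vendor's API): a pool of capacity is handed out to workloads in priority order, and whatever is left over is reported as idle capacity that could be powered down until it is needed again.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int   # higher number = higher priority
    demand: int     # units of capacity requested
    allocated: int = 0

def reallocate(pool_capacity: int, workloads: list[Workload]) -> int:
    """Grant capacity in priority order; return the idle units left over.

    Idle units model resources that could be powered down, then powered
    back up and re-provisioned when demand returns.
    """
    free = pool_capacity
    # High-priority tasks are given resources first.
    for w in sorted(workloads, key=lambda w: w.priority, reverse=True):
        w.allocated = min(w.demand, free)
        free -= w.allocated
    return free  # unneeded capacity, eligible for power-down

# Example: 10 units of capacity shared by three workloads.
jobs = [Workload("batch", 1, 6), Workload("web", 3, 5), Workload("etl", 2, 4)]
idle = reallocate(10, jobs)
# web (priority 3) gets its full 5 units, etl gets 4, batch gets the last 1.
```

A real implementation would of course re-run this loop continuously as demand shifts, and the "policy" here is simply strict priority order; the products mentioned below let organizations express far richer service-level policies.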

What's clear is that everything must adapt in real time and in a coordinated way; otherwise, problems are simply being shuffled about rather than actually being solved.

Can Those Dreams Become Real?

Over the past few years, virtualization and automation products have become available for industry standard systems, operating systems and applications. It is now possible for an organization to work with a "logical" or "virtual" view of resources. This logical view is often strikingly different from the actual physical view. Companies such as Cassatt, Racemi, Scalent Systems and a few others have been extolling the virtues of their products in just such an environment.

If we step back for a moment and ask "What does this really mean?" we soon come to the vision that this technology would make a number of things possible including the following:

  • System users may see a single physical computer as if it were many different systems running different operating systems and application software.
  • A group of systems may present the view that it is really a very large single computer doing the work.
  • Technology may allow individuals to access computing solutions using devices, and over networking technologies, that did not exist when developers created the application.
  • It may also present the image that long obsolete devices are available for use in the virtual environment even though none are actually installed.

The only limits would be those the developers and designers had in their thinking.

In the end, the appropriate use of these layers of technology offers organizations a number of benefits, including improved scalability, reliability and performance; far greater agility than is possible in a purely physical environment; and more efficient use of hardware, software and staff resources. Achieving these broader goals requires IT decision-makers to think beyond the server.

The Kusnetzky Group knows of many successful implementations of this type of virtualization using products from several suppliers. Resources are discovered automatically and can be automated to meet the organization's own service level objectives and policies.

If your datacenter is now running this way, please tell us what products you've used to make this possible.

Topics: Storage, Data Centers, Hardware


Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.



Discussion
  • Thanks to Rich Miller

    Rich Miller, of Replicate Technologies, suggested that I expand upon the themes from Friday's post. Thanks for the suggestion, Rich.

    Dan K
  • The Missing Piece: Middleware Virtualization

    Dan --

    Good topic and post. One limitation today is that although virtualization is widely used at the server and OS level, the applications themselves are not architected and built with this environment in mind. There are technologies, though, such as GigaSpaces and others, that enable virtualizing the software infrastructure, or middleware.

    See my blog: http://gevaperry.typepad.com/main/2007/12/the-missing-pie.html and that of my colleague, Nati Shalom: http://natishalom.typepad.com/nati_shaloms_blog/2007/12/middleware-virt.html
    • Middleware virtualization?

      I define application virtualization to include various frameworks that allow applications to run in an environment that differs from the physical environment, or that enable advanced functions such as workload management, application failover, or even such things as parallel databases.

      It would be good to chat about what you're defining as "middleware virtualization."

      Dan K