When application failure is not an option

Summary: Here are some thoughts from a conversation I had with the good folks at DataSynapse a while ago. I recently came across my notes and thought they might make for interesting reading.



It has become clear that most organizations simply have no time for application failure. Their IT infrastructure must be up and available whenever it is needed. DataSynapse asserts that application virtualization can address this problem while keeping hardware, software, and staff-related costs in line. I think that if the technology is properly deployed, they're right.

What's the pain

Organizations live on Internet time now. Their information systems and the applications they support must be up and available 24 hours a day, 7 days a week, 365 days a year. There is no room for unscheduled downtime, and little time for scheduled downtime for system maintenance or backups of critical data. Needing to "keep the store open" in all time zones, all of the time, means there simply isn't a good moment to "take the system down." IT executives face a great deal of pressure from customers, staff, partners, and now regulatory organizations to keep their applications available at all times. The imperative they face is that their systems must always perform.

These executives know that if their application systems become unavailable, the organization is likely to lose revenue and customers and, in some cases, face penalties and fines. Customers often won't wait; if they can't order the desired products or services, they'll simply move on, searching the 'net for other suppliers. Staff members and partners face tremendous pressure to be highly productive, and they view application failure as simply unacceptable. Regulatory organizations have no patience for late or inaccurate reports.

How can an organization create a plan that provides an environment in which application failure can be avoided? According to DataSynapse, application virtualization makes it possible for key applications to remain available during planned system downtime and during unplanned outages.

Application virtualization unleashes high availability

DataSynapse would tell IT executives that application virtualization can address all of the issues mentioned above by unleashing higher availability without unduly increasing costs for hardware, software or administration. This technology can also make it possible for organizations to make the best use of their resources (see the Kusnetzky Group paper Application Virtualization and Utopia: Proving the Value of Virtualizing Applications) and actually improve application performance as well (see Unparalleled Performance: Harvesting the Power of Application Virtualization).

While there are other ways to achieve these ends, the conversations I've had with users of DataSynapse's products lead me to conclude that the claims hold up.


In the past, application availability was achieved by placing each application system on its own set of redundant servers. Failure of any component could be hidden by having backup systems take over the necessary tasks before anyone noticed that a failure had occurred. Organizations are now challenged to reduce hardware, software, and administration costs by using virtualization technology without also introducing new points of application failure.


To accomplish this feat, IT executives know that they must adopt technology that allows application components as well as the underlying physical and virtual systems to be carefully monitored. Monitoring only the health of the physical systems that support multiple application components, applications and/or virtual systems just isn’t enough.


It is not enough to gather the details concerning the state of all of the application components, applications, virtual systems, and physical systems. There is simply too much information for the IT administrative staff to monitor in real time. This information must be integrated, decisions must be made about the appropriate actions, and those decisions must be put into effect immediately. Unfortunately, no human being, or group of human beings, knows enough about what's happening inside a complex computing solution, or can act quickly enough, to make the necessary changes before an application slowdown or failure is seen by those accessing the solution.


Making these key decisions isn't enough. People simply cannot act fast enough unaided, so other tools must be deployed to act on the decisions made by optimization technology. This means giving high-priority tasks more resources (processing time, memory, storage, and the like) when it appears that their performance is not going to meet the minimum requirements for that application system. It also means reducing the resources allocated to lower-priority tasks when necessary. Real-time adjustment of resource assignments must occur without requiring a great deal of staff time, attention, or expertise.
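The closed loop described above — monitor each application, decide, then shift resources toward high-priority work automatically — can be sketched in a few lines. This is a hypothetical illustration, not DataSynapse's actual implementation; the `App` fields, the `rebalance` function, and the share-shifting policy are all assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    priority: int      # higher number = more important workload
    cpu_shares: int    # resources currently allocated
    latency_ms: float  # observed response time from monitoring
    target_ms: float   # service-level target for this application

def rebalance(apps, step=100):
    """One pass of the monitor-decide-act loop: move CPU shares from
    the lowest-priority app that is meeting its target to each
    higher-priority app that is missing its target."""
    missing = [a for a in apps if a.latency_ms > a.target_ms]
    meeting = [a for a in apps if a.latency_ms <= a.target_ms]
    # Serve the most important struggling applications first.
    for needy in sorted(missing, key=lambda a: -a.priority):
        donors = [d for d in meeting
                  if d.priority < needy.priority and d.cpu_shares > step]
        if donors:
            donor = min(donors, key=lambda d: d.priority)
            donor.cpu_shares -= step
            needy.cpu_shares += step
    return apps
```

In practice such a loop would run continuously against live telemetry, and the "shares" would map onto whatever resource controls the platform exposes; the point is simply that the decision logic is mechanical once the monitoring data is integrated in one place.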

Real benefits

According to DataSynapse, this scenario is not a look into the far future. Tools such as DataSynapse's FabricServer do all of these things today. This means organizations can expect their tasks to be completed on time, on budget, and with no waste. Resources such as systems, software, and staff will all be used in the best possible way.


Organizations needing to stop application failure in its tracks must come to understand virtualization technology in general and application virtualization in particular. This technology offers the organization ways to harness all of its industry-standard systems, to reclaim unused processing power and put it to work for high-priority applications. It can manage planned and unplanned outages as well.

Is your organization using virtualization technology to achieve these ends? What products are you currently using?



Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.



  • We have just completed a conversion to virtual.

    Since you've asked, we have just completed switching over from a large data center of discrete servers to a virtual environment. We now use VMware ESX servers running on HP blade hardware. Each frame holds 8 physical blades and each blade runs ~6 VMs (depending on individual load and application requirements).

    Right now we have 10 frames containing 8 blades each, running around 5~6 VMs each, for a total of 400+ virtual servers offering a wide variety of services to our worldwide audience. We use VMotion to move virtual machines around the hardware in case of failure or for scheduled maintenance.

    As far as the virtual layer is concerned, we use Windows 2003 Enterprise, Sun Solaris 10 x86, and RHEL 5... each fulfilling different roles as best served by each environment. For example: our e-mail system is MS Exchange, our database is a mix of MS SQL and MySQL in clusters of several VMs, and our web farm is LAMP load-balanced across numerous VMs.

    Our goal is to provide each department of each business unit with exactly what they need in order to transact business upon a common physical environment which world-wide IT can effectively support.

  • RE: When application failure is not an option

    Drink deep the kool-aid, did ya?

    Virtualization is generally a good thing, but improper architecture can couple systems and data in ways that impact operational concurrency and impede recovery. The increased application density can increase complexity and vulnerability to outages. All business and technical risks need to be considered for the full lifecycle of ALL applications, and interdependencies need to be mapped. Virtualization should be implemented in a way that is aligned with both efficient operations and business continuity objectives.

    Virtualization is not the silver bullet as advertised. Cut back on the kool aid.
  • RE: When application failure is not an option

    The use of virtualization not only improves application redundancy, monitoring, optimization, and automation - it also improves overall software infrastructure change management. Virtualization gives IT groups a way to change, test, stage, and analyze the impact of changes to a representative infrastructure to minimize downtime.
  • Does It?

    Can you tell us whether this technology allows for things like application updates, database schema changes and the like?

    This is a fact of life in anything more than very simple systems, and if we're really talking about eliminating downtime completely, a solution must be able to provide these facilities without so much as a screen flicker for the user.

    Can it run across multiple sites to eliminate the possibility of environmental impacts like fire, flood, telecoms failure, and so on?
    Again, these are the reality.
  • oh oh

    This blog reads less like an attempt to pass on information than an overly long unpaid ad for DataSynapse. It may be a good and decent product but let's remember that any discussion on virtualization needs to be much broader than thinly disguised praise and attention to a single product.

    When I read these things I'm looking for information, not an advertisement or endorsement of a single product.
  • I'm sorry you see it that way

    If you will scan some of the other posts here, you will certainly see that many of the posts are the result of conversations with executives of companies that supply virtualization technology or their customers. So, this post certainly is not out of the ordinary.

    This post was written to summarize a conversation with executives of DataSynapse and, as such, presented their view of the world. I believe that the issues covered are of interest to readers regardless of the tools they use to resolve those issues.