Here are some thoughts from a conversation I had with the good folks at DataSynapse a while ago. I came across my notes and thought they might make interesting reading.
It has become clear that most organizations simply have no time for application failure. Their IT infrastructure must be up and available when it is needed. DataSynapse would assert that application virtualization can address this issue while keeping hardware, software and staff-related costs in line. I think that if the technology is properly utilized, they're right.
What's the pain?
Organizations live on Internet time now. This means that their information systems and the applications they support must be up and available 24 hours a day, 7 days a week, 365 days a year. There simply is no time for unscheduled downtime, and little time for scheduled downtime for system maintenance or backups of critical data. Needing to "keep the store online" in all time zones, all of the time, means there is never a good moment to "take the system down." IT executives face a great deal of pressure from customers, staff, partners and, now, regulatory organizations to keep their applications available at all times. The imperative they face is that their systems must always perform.
These executives know that if their application systems become unavailable, the organization is likely to lose revenues, customers and, in some cases, face penalties and fines. Customers often won’t wait. If they can’t order the desired products or services, they’ll just hop down the ‘net searching for other suppliers. Staff members and partners face tremendous pressure to be highly productive and they view application failure as simply unacceptable. Regulatory organizations have no patience for late or inaccurate reports.
How can an organization create a plan that provides an environment in which application failure can be avoided? According to DataSynapse, application virtualization makes it possible for key applications to remain available during planned system downtime and during unplanned outages.
Application Virtualization Unleashes High Availability
DataSynapse would tell IT executives that application virtualization can address all of the issues mentioned above by unleashing higher availability without unduly increasing costs for hardware, software or administration. This technology can also make it possible for organizations to make the best use of their resources (see the Kusnetzky Group paper "Application Virtualization and Utopia: Proving the Value of Virtualizing Applications") and actually improve application performance as well (see "Unparalleled Performance: Harvesting the Power of Application Virtualization").
While there are other ways to achieve these ends, the conversations I've had with users of DataSynapse's products lead me to conclude that they're telling the truth.
In the past, application availability was made possible by placing each application system on its own set of redundant servers. Failure of any component could be hidden by having backup systems take over necessary tasks before anyone would notice a failure had occurred. Organizations are now challenged to reduce costs of hardware, software and administration by utilizing virtualization technology without also introducing potential application failure points.
To accomplish this feat, IT executives know that they must adopt technology that allows application components as well as the underlying physical and virtual systems to be carefully monitored. Monitoring only the health of the physical systems that support multiple application components, applications and/or virtual systems just isn’t enough.
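To make the layered-monitoring idea concrete, here is a minimal sketch in Python. All of the names here are my own invention for illustration, not anything from DataSynapse's products; the point is simply that a failure at one layer (a physical host or a virtual system) has to be understood as taking down everything running on top of it:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One element of the stack: an app component, a virtual system, or a physical host."""
    name: str
    healthy: bool = True
    children: list = field(default_factory=list)  # what runs on top of this node

def all_names(node):
    """Every node in the subtree rooted at this layer."""
    return [node.name] + [n for c in node.children for n in all_names(c)]

def unhealthy(node):
    """Walk the stack top-down and collect every element that is effectively down,
    including components whose underlying layer has failed."""
    if not node.healthy:
        # everything running on a failed layer is down too
        return all_names(node)
    bad = []
    for child in node.children:
        bad.extend(unhealthy(child))
    return bad

# A physical host running one virtual system that hosts two application components.
host = Node("host-1", children=[
    Node("vm-1", children=[Node("order-service"), Node("billing-service")]),
])

host.children[0].healthy = False  # the virtual system fails
print(unhealthy(host))  # → ['vm-1', 'order-service', 'billing-service']
```

Note that monitoring only `host-1` would report a healthy system here, which is exactly the gap the paragraph above describes: the physical layer is fine while the virtual system and both application components on it are down.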
It is not enough, though, to gather the details concerning the state of all of the application components, applications, virtual systems and physical systems. There is simply too much information for the IT administrative staff to monitor in real time. The information must be integrated, decisions must be made about appropriate actions, and those decisions must be put into effect immediately. Unfortunately, no human being, or group of human beings, knows enough about what's happening inside a complex computing solution, or can react fast enough, to adjust the environment before an application slowdown or failure is seen by those accessing the solution.
Making these key decisions isn't enough, either. People simply cannot act fast enough unaided, so other tools must be deployed to act on the decisions made by the optimization technology. This means giving high-priority tasks more resources (processing time, memory, storage and the like) when it appears their performance will not meet the minimum requirements for that application system, and reducing the resources allocated to lower-priority tasks when necessary. Real-time adjustment of resource assignments must occur without requiring a great deal of staff time, attention or expertise.
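As a rough illustration of the kind of policy such tools automate (a toy sketch of my own, with hypothetical names and numbers, not DataSynapse's actual algorithm), consider a rebalancer that takes resource units away from lower-priority workloads whenever a higher-priority workload falls short of its minimum requirement:

```python
def rebalance(apps):
    """Shift resource units so that higher-priority apps meet their floor.

    apps: list of dicts with 'name', 'priority' (1 = highest), 'need'
    (minimum units required) and 'units' (currently assigned).
    """
    # Satisfy the most important workloads first.
    for app in sorted(apps, key=lambda a: a["priority"]):
        shortfall = app["need"] - app["units"]
        if shortfall <= 0:
            continue
        # Reclaim capacity from the least important workloads first.
        for donor in sorted(apps, key=lambda a: -a["priority"]):
            if donor is app or shortfall == 0:
                continue
            give = min(donor["units"], shortfall)
            donor["units"] -= give
            app["units"] += give
            shortfall -= give
    return apps

# A high-priority app short of its floor, and a low-priority app with slack.
apps = [
    {"name": "trading",   "priority": 1, "need": 6, "units": 4},
    {"name": "reporting", "priority": 3, "need": 2, "units": 4},
]
rebalance(apps)
print(apps[0]["units"], apps[1]["units"])  # → 6 2
```

A real product would of course base "need" on live performance measurements against each application's service levels rather than on fixed numbers, but the shape of the decision, take from low priority, give to high priority, without human intervention, is the one described above.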
According to DataSynapse, this scenario is not a look into the far future. Tools such as DataSynapse's FabricServer do all of these things today. This means that organizations can expect their tasks to be completed on time, on budget and without waste, with resources such as systems, software and staff all used in the best possible way.
Organizations needing to stop application failure in its tracks simply must come to understand virtualization technology in general and application virtualization in particular. This technology offers ways to harness all of an organization's industry-standard systems, to reclaim unused processing power and put it to work for high-priority applications, and to manage planned and unplanned outages alike.
Is your organization using virtualization technology to achieve these ends? What products are you currently using?