
Continuous processing - a conversation with Stratus

With all of the different ways to make a workload continue to execute when there is a failure, when is hardware-based continuous processing the best choice?
Written by Dan Kusnetzky, Contributor

David C. Laurello, President and Chief Executive Officer, Chairman of the Board of Stratus Technologies, took a few moments out of his busy schedule to discuss his company's views on continuous processing in physical, virtual and cloud environments. It's been a while since we had a chance to speak and it was good to catch up with him. The conversation touched on a number of different concepts and what Stratus is doing. The company's goal is to provide tools to make "continuous processing" straightforward and easy to use.

Different requirements

The conversation began with the fact that organizations have different requirements for reliability, availability and performance for critical and important workloads. Some workloads simply must be available at all times or the organization will fail. Other workloads can experience outages of a few hours and the organization's function can continue. Still other workloads can experience downtime ranging from days to weeks without harming the organization to any great degree.

The conversation then turned to the many different approaches to increasing levels of uptime that have been used over the last few decades. Which approach is best depends upon the application architecture and where the application and its components are executing. Different approaches rise in popularity depending upon whether the application is monolithic, multi-tier, or multi-tier and distributed. Add execution directly on a physical host, on a virtual system, or as part of a cloud-based service to the mix, and the choice of the best approach can become very complex.

Here are some of the approaches touched on during our conversation:

Clustering

One of the more common approaches is to cluster systems together to create what appears to be a single computing solution. Systems are connected together using some form of high-speed interconnect. Optionally, the clustered systems may share network-attached or local storage. Some approaches require shared network-attached storage.

We then went on to discuss the different types of clusters organizations deploy. Even though the hardware configuration can be identical, different types of software can be deployed based upon the organization's goals. Here are the software approaches that were touched on during the conversation:

Access virtualization clusters

Clusters can be created using access virtualization, application virtualization, or several different forms of processing virtualization. The goals for each type of cluster differ from the others. Let's examine each type separately.

Access virtualization clusters are designed to make access to individual applications survive the loss of a system. Applications are hosted on multiple systems. Access to those applications is made possible using an access virtualization product such as Microsoft Terminal Services, Citrix XenDesktop or VMware View.

Access virtualization clusters depend upon the fact that each application is replicated multiple times, once on each server in the cluster. Users access these applications through access virtualization technology. If an application fails, the user is prompted to log in again. When they do, they will be attached to another system. This implementation may require costly changes to be made to the application itself to make it cluster-aware.
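
To make that reconnect path concrete, here is a minimal sketch in Python. The Broker class, host names and failure bookkeeping are all invented for illustration; this is not the API of Terminal Services, XenDesktop or View.

```python
# Hypothetical connection broker: each application is published on several
# hosts, and a returning user is attached to the first surviving replica.
from dataclasses import dataclass, field

@dataclass
class Broker:
    # application name -> hosts that publish a replica of that application
    replicas: dict[str, list[str]]
    failed_hosts: set[str] = field(default_factory=set)

    def mark_failed(self, host: str) -> None:
        self.failed_hosts.add(host)

    def connect(self, user: str, app: str) -> str:
        # On (re)login, attach the session to the first surviving host.
        for host in self.replicas[app]:
            if host not in self.failed_hosts:
                return f"{user} attached to {app} on {host}"
        raise RuntimeError(f"no surviving host publishes {app}")

broker = Broker({"payroll": ["host-a", "host-b", "host-c"]})
print(broker.connect("alice", "payroll"))  # host-a
broker.mark_failed("host-a")               # host-a goes down
print(broker.connect("alice", "payroll"))  # after re-login: host-b
```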

Storage virtualization technology, such as network-attached storage, is also required to make sure that each copy of an application has access to the same data. Shared storage is traditionally an expensive hardware purchase.

In the event of a hardware or system failure, incomplete transactions will be lost, but the user will still have access to the needed applications.

Application virtualization clusters

Application virtualization clusters are designed to make individual applications survive the loss of a system. As with access virtualization clusters, applications are hosted on multiple systems. A workload management system accepts the user's request for an application and connects them to the system having the greatest available capacity.
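
Here is a minimal sketch of that dispatch rule, with invented host names and load figures standing in for real telemetry; it is not any particular workload manager's API.

```python
# Hypothetical least-loaded dispatch: connect each request to the system
# with the greatest available capacity (here, the lowest CPU utilization).
def dispatch(app: str, load_by_host: dict[str, float]) -> str:
    """Return the host with the most spare capacity."""
    return min(load_by_host, key=load_by_host.get)

hosts = {"host-a": 0.82, "host-b": 0.35, "host-c": 0.61}  # CPU utilization
print(dispatch("order-entry", hosts))  # host-b, the least-loaded system
```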

As with access virtualization clusters, application virtualization clusters depend upon the fact that each application is replicated multiple times, once on each server in the cluster. Not all applications can be virtualized off the shelf, however; legacy applications may require additional development work.

What is different is that the clustering is managed at the application layer rather than at the access layer. Unlike access virtualization clusters, if an application fails, the user is automatically switched to the same application running on another system. Here, too, storage virtualization technology, such as network-attached storage, is required to make sure that applications have access to the same database.

Uncompleted transactions can still be lost, but the user will still have access to the needed applications.

Processing virtualization clusters

There are two different forms of processing virtualization clusters. One depends upon a cluster manager that makes it possible to see multiple systems as a single computing resource. The other combines server virtualization technology with performance monitoring software, orchestration software and virtual machine movement software.

Using a cluster manager

If the cluster manager detects a slowdown or failure, applications may be moved from one system to another. The applications must be written to be cluster-aware. Making a single-tiered application “cluster aware” is one level of complexity. The bigger challenge comes when you are dealing with a complex multi-tiered application (i.e., a web access layer, middleware and a back-end DBMS). Making a multi-tiered application “cluster aware” requires that you synchronize state information between all nodes and all application layers. Since most modern applications are multi-tier, multi-system and distributed, synchronizing all layers of an application and making sure that the right data gets to the right system at the right time can be a nightmare.

This form of processing virtualization requires multiple systems, storage virtualization and applications written to be cluster aware. Implementing this form of processing virtualization requires expertise in clustering.

If the organization is using an application that was written to be cluster-aware, data will not be lost if there is a failure. It can take quite some time, however, for this form of cluster to respond to an outage. The cluster manager has to examine each process that is executing, determine what cluster resources it is using and then assign that application to one of the remaining systems.
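
Here is a minimal sketch of that recovery pass, with hypothetical names throughout; real cluster managers add fencing, quorum and retry logic that is omitted here.

```python
# Hypothetical failover pass: walk every process on the failed node, note
# the cluster resources each one holds, and reassign it to a survivor.
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    resources: list[str]  # cluster resources (disks, IPs, ...) it holds

def fail_over(failed: str, placement: dict[str, list[Process]]) -> None:
    survivors = [n for n in placement if n != failed]
    for proc in placement.pop(failed, []):
        # Examine the process, then pick the least-loaded surviving node.
        target = min(survivors, key=lambda n: len(placement[n]))
        print(f"moving {proc.name} (resources: {proc.resources}) to {target}")
        placement[target].append(proc)

placement = {
    "node-1": [Process("dbms", ["disk-1", "vip-10"])],
    "node-2": [Process("web", ["vip-20"])],
    "node-3": [],
}
fail_over("node-1", placement)  # dbms and its resources move to node-3
```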

Virtual server failover clusters

Server virtualization with failover is an approach to availability in which workloads run inside virtual machines and an orchestration manager monitors those virtual machines. If a slowdown or failure is detected by the orchestration manager, virtual machines are moved to another clustered system by virtual machine movement software, such as VMware's vMotion.

This form of processing virtualization is a bit less complex than using a cluster manager. Applications don't need to be specially developed, as they do with a cluster manager. The organization needs, however, to have expertise on staff in virtual machine software, orchestration managers, storage virtualization and virtual machine movement software. If a failure occurs, an entire virtual machine is moved to another machine, resulting in the transfer of a large amount of information. Uncompleted transactions can be lost. Furthermore, users may experience a considerable delay while a virtual machine is being moved from one physical system to another.
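
A minimal sketch of that orchestration loop follows, with invented names; the migrate() function merely stands in for a real mechanism such as vMotion, and real orchestration managers add placement policy and health checking omitted here.

```python
# Hypothetical orchestration loop: when a host is unhealthy, every VM it
# runs is brought up on a surviving clustered host.
def migrate(vm: str, src: str, dst: str) -> None:
    # Moving a whole VM means transferring its memory and device state,
    # which is why failover involves a large transfer and a visible delay.
    print(f"migrating {vm}: {src} -> {dst}")

def orchestrate(vms_by_host: dict[str, list[str]],
                healthy: dict[str, bool]) -> None:
    survivors = [h for h, ok in healthy.items() if ok]
    for host, ok in healthy.items():
        if ok:
            continue
        for vm in vms_by_host.pop(host, []):
            dst = survivors[0]  # simplest possible placement policy
            migrate(vm, host, dst)
            vms_by_host[dst].append(vm)

vms = {"esx-1": ["erp-vm"], "esx-2": []}
orchestrate(vms, {"esx-1": False, "esx-2": True})  # erp-vm moves to esx-2
```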

Dave then went on to discuss the fact that clustering approaches often offer high levels of availability, but the failover time can cause problems for applications that require immediate response at all times.

An alternative to clusters

Dave pointed out that several industry suppliers looked at the various forms of HA software and concluded that the approaches used in building fault-tolerant systems would allow high levels of availability to be offered with a much simpler configuration and simpler operations. Stratus has focused on exactly this different approach, continuous processing: using redundant hardware or software components to create a never-fail environment on a very simple configuration. No cluster manager is required. Potential failures are detected long before they can have an impact and are prevented. Virtual servers would simply begin execution on the other system if the original host failed.

Dave also pointed out that Stratus' ftServer family of fault-tolerant computing systems is often utilized to make virtual server-based environments more highly available as well.

Looking to the future

Cloud computing environments have the same requirements for availability, but face additional challenges, including dealing with workloads from multiple tenants that must be kept separate and the extreme scale of workloads that require high levels of availability. Although Dave didn't specify what Stratus is doing about these additional requirements, I would suspect that the company's Avance software combined with the technology from Stratus' recent acquisition of Marathon Technologies will play a big role.

---

Note: Stratus Technologies is a Kusnetzky Group client.
