Dealing with Disaster

Although there is nothing IT departments can do to prevent natural disasters, hybrid clouds give IT a strong foundation for redundancy, ensuring that mirrored infrastructure and applications remain intact when disaster strikes.
Written by John Rhoton, Contributor

Hurricane Sandy took many by surprise in the Northeast last year. The disaster caused approximately $75 billion in damage. Severe flooding destroyed the equipment in many datacenters and led to power and communications outages that made infrastructure temporarily unusable.

Disasters aren’t new. They’ve been around for about as long as the earth has been in existence. We can’t avoid them. But we can make businesses resilient to them, particularly when it comes to IT. Perhaps in the wake of Hurricane Sandy, we can learn from some of our mistakes.

The key to continuity is redundancy. If the infrastructure and applications are mirrored to multiple instances, any one of them can go down – but the system will remain intact.

A hybrid cloud is a great foundation for this redundancy. By definition, it encompasses at least two delivery models, one on-premises and one in the public cloud. But, more than that, it implies a flexible design – one that is independent of the source of its services. This is an exciting shift in technology because it enables the integration of data distribution services running across multiple facilities.

These services can be dispersed across multiple dimensions to minimize exposure in the event of an outage. For example, if they are geographically distant from each other, it is less likely that they would be susceptible to the same natural disaster. If separate organizations operate the data distribution services, for example with replication between an on-premises Windows Server 2012 environment and Windows Azure, then a fault in one organization's processes and procedures is unlikely to affect both systems at the same time. The services are also likely to have some technical differences, at least in terms of configuration settings, making it unlikely that the duplicated data would be affected by the same error.
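
To illustrate that idea of dispersion, here is a minimal sketch in Python, using purely hypothetical replica names, regions and operators, that checks whether any two copies of a dataset still share a failure domain such as a region or an operating organization.

```python
from itertools import combinations

# Hypothetical replica inventory: each copy of a dataset is tagged with
# the facility's region and the organization that operates it.
replicas = [
    {"name": "primary",   "region": "us-east", "operator": "in-house"},
    {"name": "dr-onprem", "region": "us-west", "operator": "in-house"},
    {"name": "dr-cloud",  "region": "eu-west", "operator": "public-cloud"},
]

def shared_failure_domains(copies):
    """Return pairs of replicas that share a region or an operator,
    i.e. pairs that could plausibly be taken out by the same event."""
    risky = []
    for a, b in combinations(copies, 2):
        shared = [dim for dim in ("region", "operator") if a[dim] == b[dim]]
        if shared:
            risky.append((a["name"], b["name"], shared))
    return risky

for a, b, dims in shared_failure_domains(replicas):
    print(f"warning: {a} and {b} share the same {', '.join(dims)}")
```

In this example the two in-house copies still share an operator, which is exactly the kind of residual exposure that adding a public cloud replica removes.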

There are different ways we can integrate data distribution services to help with business continuity. It really depends on how much we can afford to invest, how quickly we need to be able to fail over and how much data, if any, we can afford to lose in the process. In other words, it comes down to each service's recovery time objective (RTO) and recovery point objective (RPO).
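
To make that trade-off concrete, the sketch below uses plain Python with purely illustrative service names and thresholds: it records an RTO and an RPO for each service and maps them onto the three approaches described next.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    rto_hours: float   # how long the business can wait for recovery
    rpo_hours: float   # how much recent data the business can afford to lose

# Hypothetical portfolio with illustrative recovery targets.
services = [
    Service("order-processing", rto_hours=1,  rpo_hours=0.25),
    Service("reporting",        rto_hours=24, rpo_hours=12),
    Service("archive-search",   rto_hours=72, rpo_hours=48),
]

def recovery_approach(svc: Service) -> str:
    """Map recovery targets to one of the three approaches described below.
    The thresholds are examples, not industry rules."""
    if svc.rto_hours <= 4 and svc.rpo_hours <= 1:
        return "hot site (continuous replication)"
    if svc.rto_hours <= 48:
        return "warm site (periodic backups at the secondary facility)"
    return "cold site (install and restore on demand)"

for svc in services:
    print(f"{svc.name}: {recovery_approach(svc)}")
```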

The most basic and cheapest solution to disaster recovery is a cold site. It involves little more than installing and testing the application in a secondary facility. In the event of a failure in the primary datacenter, it would be necessary to package and transport the data to the secondary site. While this may be sufficient for very low-priority services, a more common – and pricier – approach is a warm site, which differs in keeping periodic backups at the secondary site to accelerate the recovery.
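
As a rough sketch of what a warm-site backup job might look like, the Python below packages a data directory on a schedule and drops the archive at a secondary location. The paths and interval are assumptions; in practice the drop point would be storage at the remote facility or in the public cloud.

```python
import shutil
import time
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical locations: adjust to the real primary data directory
# and the secondary (warm) site's backup drop point.
DATA_DIR = Path("/srv/app/data")
WARM_SITE_DROP = Path("/mnt/secondary-site/backups")
INTERVAL_SECONDS = 6 * 60 * 60  # every six hours; this interval bounds how much data can be lost

def take_backup() -> Path:
    """Package the data directory into a timestamped archive at the warm site."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive_base = WARM_SITE_DROP / f"app-data-{stamp}"
    # make_archive appends the archive suffix and returns the final path.
    return Path(shutil.make_archive(str(archive_base), "gztar", root_dir=str(DATA_DIR)))

if __name__ == "__main__":
    while True:
        print(f"backup written to {take_backup()}")
        time.sleep(INTERVAL_SECONDS)
```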

A hot site involves continuously replicated data, for example using Hyper-V Replica. Clearly this is more complex to implement, and it involves the additional expense of operating duplicate infrastructure. However, in the event of a failure, a hot site offers the compelling benefit of switching over to the duplicate infrastructure almost immediately, with little or no data loss.
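
Hyper-V Replica does this at the virtual machine level. Purely as a toy illustration of the continuous replication idea, not of Hyper-V Replica itself, the Python sketch below polls a primary directory (hypothetical paths) and mirrors new or changed files to a replica path so that the copy stays only seconds behind the source.

```python
import shutil
import time
from pathlib import Path

# Hypothetical paths: a primary data directory and its continuously
# maintained replica (in practice, a remote or cloud-hosted target).
PRIMARY = Path("/srv/app/data")
REPLICA = Path("/mnt/hot-site/data")
POLL_SECONDS = 5

def sync_changes() -> int:
    """Copy any file that is new or newer on the primary to the replica."""
    copied = 0
    for src in PRIMARY.rglob("*"):
        if not src.is_file():
            continue
        dst = REPLICA / src.relative_to(PRIMARY)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied += 1
    return copied

if __name__ == "__main__":
    while True:
        changed = sync_changes()
        if changed:
            print(f"replicated {changed} file(s)")
        time.sleep(POLL_SECONDS)
```

A real hot site also has to handle deletions, conflicts and failback, which is precisely why purpose-built products such as Hyper-V Replica exist.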

There is no reason that all services in the datacenter need to adopt the same approach. It is possible – and advantageous – to mix all three, setting up a hot site for mission-critical applications while relying on warm or cold options for less important systems. The key is to ensure that there is sufficient redundancy for the business to continue even when there is a disaster. There will be more disasters. However, we are in a better position than ever to minimize the damage these disasters create, and to get things back to normal shortly thereafter.