
Double copy

Case study: Comprehensive data duplication is necessary in any good disaster recovery plan. But as one organization has found, the more there is to replicate, the costlier it is to maintain.
Written by Eileen Yu, Senior Contributing Editor
Establishing real-time data replication between the primary and backup sites ensures a company can resume business quickly, right up to the second operations were abruptly interrupted. Organizations should, however, manage their users’ expectations about exactly how much data really needs to be duplicated.

In 2002, Fortune magazine ranked BNP Paribas the world’s fourth-largest bank by revenue. With operations in 85 countries, including 13 cities in the Asia-Pacific region, and a customer base that includes multinational corporations and financial institutions, the bank takes its data protection and business continuity plans seriously.

In fact, it has a disaster recovery (DR) plan that stretches over 800 pages, detailing what each department and employee needs to do should the network go down, said Singapore-based Jean-Jacques Girondel, BNP’s head of Asia-Pacific infrastructure and operations at the Asia Regional Computer Center. Singapore serves as the bank’s centralized backup location.

“It is detailed and up-to-date, and reflects new instructions every time something changes in the organization,” he said. “We worked with all our service providers who are also involved in the DR exercise. Having a detailed document is very important.”


Jean-Jacques Girondel, head of Asia-Pacific infrastructure and operations, BNP Paribas

BNP invested in a suite of products from storage vendor EMC to ensure it can switch over to a backup infrastructure quickly and without fuss. These software and hardware solutions included Symmetrix networked storage systems, Symmetrix Remote Data Facility (SRDF), PowerPath and TimeFinder, all of which help facilitate real-time data replication across the bank’s operations.

“Today, I replicate nearly 100 percent of all data,” Girondel said. “It does get complicated when you discuss with your users what kind of data is critical and what is not, because to them, everything is critical.”

Less is better
“But it would be more comfortable for us if we could replicate smaller amounts of data because it would then be faster and less costly to retrieve data when there’s a network outage,” he explained. “My dream is to reduce the amount to replicate because it’ll bring down our cost and the effort needed to analyze the information, and ensure data integrity.”

To achieve his goal, he meets every year with the bank’s department heads to identify their requirements and decide what needs to be replicated. They then define (and re-define) data according to priority.
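As a rough illustration of what such a prioritization exercise might produce, the sketch below tags each data set with a replication tier and a recovery-point objective. The tier names, data sets and figures are hypothetical, not BNP’s actual classifications.

# Illustrative sketch only: a hypothetical way to record the outcome of an
# annual review, tagging each data set with a replication tier. The tiers,
# data sets and recovery-point objectives (RPOs) are invented for this example.
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    owner: str   # department responsible for the data
    tier: str    # "critical", "important" or "deferrable"

# RPO per tier: how much data loss (in seconds) the business will tolerate.
RPO_SECONDS = {
    "critical": 0,        # replicate synchronously, in real time
    "important": 3600,    # replicate at least hourly
    "deferrable": 86400,  # a nightly batch copy is enough
}

datasets = [
    DataSet("trade-settlement", "Treasury", "critical"),
    DataSet("client-reporting", "Client Services", "important"),
    DataSet("internal-wiki", "IT", "deferrable"),
]

# Only the "critical" tier needs the expensive real-time replication link.
realtime = [d.name for d in datasets if RPO_SECONDS[d.tier] == 0]
print("Replicate in real time:", realtime)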

Girondel also aims to do away with running DR tests twice a year. Instead, he hopes to rotate the role of ‘primary site’ among BNP’s various locations across the globe.

He added that conducting too many DR exercises is resource-intensive and time-consuming.

“It is do-able but we’ll need to make sure that every change is reflected immediately at the secondary site,” he explained. “The two infrastructures must be exactly the same at any one time.”

There is also the issue of cost: the bandwidth needed to transfer large amounts of data between two sites located in different parts of the world remains expensive.
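Some back-of-envelope arithmetic, using assumed volumes rather than BNP’s figures, shows why shrinking the replicated set matters: the sustained bandwidth a replication link needs scales directly with how much data changes each day.

# Back-of-envelope figures only, using assumed numbers (not from the article):
# estimate the sustained bandwidth needed to copy a day's changes offsite.
def required_mbps(changed_gb_per_day: float, replication_window_hours: float) -> float:
    bits = changed_gb_per_day * 8e9           # gigabytes changed -> bits
    seconds = replication_window_hours * 3600
    return bits / seconds / 1e6               # megabits per second

# Replicating everything versus only the critical subset of daily changes.
print(required_mbps(500, 24))   # ~46 Mbps sustained for 500 GB/day
print(required_mbps(50, 24))    # ~4.6 Mbps if only a tenth is truly critical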

This article appears only on the Web version of C|Level Asia.
