
UnionBank moves with the times

After 10 years on the mainframe, Union Bank of the Philippines migrates to a new core banking system.



CIO 1-on-1: Every business knows it has to move with the times and stay in tune with the market if it wants to establish a better relationship with its customers.

So when the Internet emerged as an alternative communication and distribution channel in the mid-1990s, Union Bank of the Philippines (UnionBank) knew it couldn't just ignore the digital medium.

Ceferino P. Toletino, the company's head of IT, said UnionBank embraced the Internet and worked the platform into its overall business model. "One thing the bank's senior management realized about seven or eight years ago was that the Internet would be a major battleground. So we started on it early, becoming the first bank to get an Internet banking license, and designed our infrastructure to suit this business model," he told ZDNet Asia.

With a reputation for being "tech-savvy", UnionBank pursues a strategy of tapping the latest technologies so that it can respond quickly to market changes. And when the bank sees an opportunity to introduce new products more quickly and with less fuss, it will not hesitate to take it.

On Apr. 25, 2005, UnionBank went live with Infosys' Finacle core banking software. The bank replaced its 10-year-old mainframe legacy systems to achieve greater business agility and lower total cost of ownership.

The new system, which runs on Sun Fire servers, Sun StorEdge arrays and the Sun Java platform, has been successfully deployed in over 110 branches across the country, according to Ceferino. More than 700 users log on to Finacle concurrently every day.

The bank's IT head discusses UnionBank's reasons for changing its core banking system, and shares the lessons it learnt from adopting a "big-bang" implementation approach.

Q. What were the limitations of the legacy system?
A. The main limitation of our previous core system was that it was not implementation-friendly. That means if a business unit wanted to offer a client a product or service which was custom-designed and custom-built for that client, the operations people had to design special processes to implement the new product.

IT systems had to be built, tested and deployed. QA (quality assurance), audit reviews and user training also had to be conducted. This took an inordinate amount of time, and management felt that the time taken to bring new products to market would decrease significantly if we replaced what we had with a platform that was business user-friendly and easy to maintain.

What products and services will the bank now be able to develop and roll out with the installation of the new system?
The infrastructure we deployed does not cater to any one specific type of product or service. In legacy systems, each new product or service offered by a bank requires endless hours of operational planning and system development. With our current setup, almost all types of back-end operational workflow and front-end interfaces are already built into the system.

Since the interfaces to the core system are standard, we can bring a new product to the market within a very short time-span by taking a mix-and-match approach to building back-end operations and front-end interfaces.

In other words, the business no longer has a major dependency on IT to deploy a product. Rather, the new system allows business units to develop a product and market it with minimal help from IT.
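To make this mix-and-match idea concrete, here is a minimal, purely hypothetical Java sketch. None of the classes or names below come from Finacle or UnionBank; they simply illustrate how a new product might be defined as configuration over pre-built back-end workflows and front-end channels rather than custom-coded each time.

    // Illustrative sketch only: hypothetical types, not Finacle's actual API.
    // A "product" is composed from building blocks the core system already has.
    import java.util.EnumSet;
    import java.util.Set;

    enum Channel { BRANCH, ATM, INTERNET, PHONE, MOBILE }
    enum Workflow { ACCOUNT_OPENING, INTEREST_ACCRUAL, STATEMENT_GENERATION }

    // A product definition is configuration, not new code.
    record ProductDefinition(String name,
                             Set<Workflow> backEndWorkflows,
                             Set<Channel> frontEndChannels,
                             double interestRate) { }

    public class ProductCatalog {
        public static void main(String[] args) {
            // A business unit "designs" a new savings product by selecting
            // workflows and channels that the core system already supports.
            ProductDefinition youthSavings = new ProductDefinition(
                    "Youth Savings",
                    EnumSet.of(Workflow.ACCOUNT_OPENING, Workflow.INTEREST_ACCRUAL),
                    EnumSet.of(Channel.BRANCH, Channel.INTERNET, Channel.MOBILE),
                    0.025);
            System.out.println("Launching product: " + youthSavings.name());
        }
    }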

In terms of other benefits, can you give examples of process efficiencies that will be gained from this?
One big advantage is having more detailed transactional information stored in a scalable, true relational database, as well as greater ease of integration with other external production systems.

With this information, backroom operations can be supported by systems that automatically validate transactions such as bill payments, regardless of the delivery channel, including over-the-counter, ATMs (automated teller machines), the Internet, phone banking and mobile banking. We did this centrally for all our bill payment officers, freeing up our branches, which were previously tasked with manual processing.
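As a purely illustrative aside, the hypothetical Java sketch below shows what a single, channel-agnostic validation routine can look like. The types, biller codes and rules are invented for illustration and are not the bank's or Finacle's actual code; the point is only that every channel calls the same central logic backed by one data store.

    // Illustrative sketch only: hypothetical types and data.
    // Every delivery channel (branch counter, ATM, Internet, phone, mobile)
    // calls the same central validation before a bill payment is posted.
    import java.util.Map;
    import java.util.Set;

    public class BillPaymentValidator {

        // Stand-ins for lookups against the central relational database.
        private static final Set<String> KNOWN_BILLERS = Set.of("ELECTRIC_UTIL", "TELCO_A");
        private static final Map<String, Double> MINIMUM_PAYMENT =
                Map.of("ELECTRIC_UTIL", 100.0, "TELCO_A", 50.0);

        // The channel is accepted only to show that all channels share this path;
        // it does not change the rules.
        public static boolean validate(String billerCode, String referenceNumber,
                                       double amount, String channel) {
            if (!KNOWN_BILLERS.contains(billerCode)) return false;
            if (referenceNumber == null || referenceNumber.isBlank()) return false;
            return amount >= MINIMUM_PAYMENT.getOrDefault(billerCode, 0.0);
        }

        public static void main(String[] args) {
            // An ATM front end and an Internet banking front end hit the same logic.
            System.out.println(validate("ELECTRIC_UTIL", "1234567890", 1500.00, "ATM"));
            System.out.println(validate("TELCO_A", "", 500.00, "INTERNET"));
        }
    }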

Another advantage with the new system is that we now have more information available in the area of clearing. We've been able to address issues with un-posted, returned checks much faster because of the additional details--related to each check--that are now available to us. In due time, we will look into automating the outward clearing of checks.

The bank expects the new system to reduce its total cost of ownership. How much of a reduction is the bank hoping to achieve, and by when?
Having moved from the mainframe to open systems, we save in terms of operations and maintenance. We are still at the stage where we're looking at new ways to gain more benefits from the newly installed system and improve our processes to maximize its capability.

Why did the bank choose the "big-bang" approach, which has been said to pose technical and business challenges in some rollouts? What potential issues or concerns did the bank consider, and how did it overcome them?
All processing is centralized and teller functionality is browser-based. This strategy was employed so that there is a single point of control for all systems. The piecemeal approach to branch conversion is valid only where a portion of the teller application resides at the branch level. Conversion then becomes more of a physical limitation, since you have a finite number of people to go around and install software at the branches.

The big-bang approach is suited for a centralized application deployment which the bank employed with this software. All branches use an application which is centrally implemented using Internet technologies. Since there was no physical limitation, we felt that the difference in effort of converting a single branch, as compared to multiple branches or even all of our branches at a single time, was minimal.

One drawback, though, which we considered very seriously before implementation, was that a failure affecting one branch would also affect every other branch in the network. However, since the software had been centrally deployed, configured and implemented, we felt that the probability of a successful implementation was high.

What are some of the lessons that you have learnt from taking the big-bang approach?
First, you need proper planning and coordination with all personnel involved in the project. Make sure the existing network infrastructure is capable of delivering all the services down to the branch level. User training is also a key factor in a big-bang approach, as it ensures users adapt well to the new system.