ATO monitors service performance from the outside

They say too many cooks spoil the broth, but the customer service problem recently facing the Australian Taxation Office (ATO) was proving even harder to swallow. Despite the efforts of many outsourcers and developers to improve the performance of its online Business and Tax Agent Portals, call centres were fighting through long queues and ATO management sent a directive to the IT department to fix the problem -- fast.


A scramble to evaluate options brought the ATO to the doorstep of performance measurement vendor Mercury, whose Business Availability Center monitoring tools quickly gave the ATO the insight it needed to dig down to the source of the problem. The issue: although the applications were ostensibly working fine, the combination of systems had exacerbated previously unnoticed bottlenecks.

"Because we have a lot of outsourcers engaged in delivering services to us, we have a lot of teams providing services to our business folk," says Terry Green from the Mercury Application Management team within the ATO. "We were still very much in the mode of IT where we would think that if the server was fine, that meant clients could use the service. But the business was getting impacted by its clients not being able to use their products and services".

Thus began Business Availability Center @ATO, a concerted effort to unite business and IT leaders within the ATO. The project involved not only the implementation of systems for automating the performance testing of the organisation's online services, but also saw the testing systems wrapped with a unified business structure aimed at preventing a recurrence of the call centre dramas.

Application testing is often focused on making sure a system is performing rapidly enough to keep up with scaling demand. The focus of BAC@ATO, however, was not only to evaluate application performance but, more importantly, to ensure that the user experience of systems -- including 33 production and 16 development servers running Siebel and internally built WebSphere J2EE and .NET applications -- remained within acceptable limits. This meant installing Mercury probes -- 24 internal and 16 external business process monitors, and 4 Real User Monitors (RUMs) -- at various points around the network.

"We never ever talk about infrastructure," Green says. "We're finding that as the business is telling us about the processes that are important to them, some of the application people may not have the level of detail around that business process that we would like. Sometimes we have the business people saying 'these are the most critical transactions that clients can use', but production will show that they're not".

This disparity is addressed by software probes, placed on the systems providing those functions, that continuously monitor actual performance. RUM, in particular, tracks actual user activity to tell the business leaders just how well the systems are performing in the real world. A departure from traditional machine-generated user sessions, RUM data provides a more accurate data set based on the real activities of real users.
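The scripted side of this approach can be illustrated with a minimal sketch of a synthetic business process monitor. Everything here is illustrative, not the Mercury product's actual behaviour: the 2-second SLA threshold, the traffic-light statuses, and the stand-in transaction are all assumptions for the example.

```python
# Minimal sketch of a synthetic "business process monitor" check.
# The SLA threshold and status names are hypothetical, not Mercury's.
import time

SLA_SECONDS = 2.0  # assumed acceptable response time for one transaction


def classify(elapsed_seconds, sla=SLA_SECONDS):
    """Map a measured transaction time onto a simple traffic-light status."""
    if elapsed_seconds <= sla:
        return "OK"
    if elapsed_seconds <= 2 * sla:
        return "WARN"
    return "CRITICAL"


def probe(transaction, sla=SLA_SECONDS):
    """Run one scripted transaction and report how long it took."""
    start = time.perf_counter()
    transaction()  # e.g. log in, lodge a form, log out
    elapsed = time.perf_counter() - start
    return elapsed, classify(elapsed, sla)


# Example: a stand-in transaction that simulates 50 ms of work.
elapsed, status = probe(lambda: time.sleep(0.05))
print(status)  # a 50 ms transaction is well inside the assumed 2 s SLA
```

A real user monitor inverts this design: instead of generating the transaction itself, it passively observes live traffic and feeds the same kind of timing data back, which is why the two data sets complement each other.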

The most significant value of the system, however, is that it has been implemented and managed with the ongoing involvement of business leaders throughout the ATO. Rather than simply validating a software design according to arbitrary performance requirements, the application testing staff hold regular meetings with business leaders -- around 30 people attend every week -- to review performance and tweak the application strategy.

"In the past, we would have critical situations and have 20 people in the room giving opinions on what the problem might be," says Green. "With the BAC, all the data comes back to one central source within our internal LAN, and we have a source of information that everyone can bring up visually. That lets us make decisions based on what the system is telling us, rather than what we think the system is telling us".

Over the next year, the ATO will extend the BAC environment further across its systems, integrating BAC with its other performance monitoring tools. A planned update to RUM will provide even more visibility of user interactions with the systems, and the team will also focus on aligning the BAC@ATO initiative with internal governance objectives.

The most important benefit from the initiative: after taking a step back to look at the overall experience it's providing, the ATO has been able to drive improvements in areas of the business and IT that it didn't even know were problems.

"Because what we're doing here is something we haven't done before, we're seeing some new things -- even from our security teams -- on how they're adjusting to meet the new business need of today," says Green. "We believe we will get better and understand the client experience more, which means we'll be able to focus more on the critical levels."
