
Advanced IT monitoring delivers predictive diagnostics focus to United Airlines

United Airlines used predictive diagnostics to better track thousands of systems and applications from its newly merged company. It also had to dig deeper and orchestrate an assortment of management elements to produce the right diagnostic focus.
Written by Dana Gardner, Contributor

The next edition of the HP Discover Performance Podcast Series explores how United Airlines demanded better performance and monitoring from IT — and got it.

We'll see how United not only had to better track thousands of systems and applications from its newly merged company, but also how it had to dig deeper and orchestrate an assortment of management elements to produce the right diagnostic focus, and thereby reduce outages from hours to mere minutes.

Learn more about how United has gained predictive monitoring and more effective and efficient IT performance problem solving from our guest, Kevin Tucker, managing director of Platform Engineering at United Airlines. The interview was moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Below are some excerpts.

You had two major companies coming together with Continental and United. Help us understand the challenge you faced as you tried to make things work together and improve performance.

Kevin Tucker. Image: LinkedIn

The airline industry is one of the most complex IT environments in existence. I think it's really difficult for the average flyer to understand all of the logistics that have to happen to get a flight off the ground.

There are millions of messages moving through the system, from weight and balance to reservation changes. There's the Network Operations Center (NOC) that has to make sure that we're on time with slots. There are fuel concerns. And we have to ensure that, with all of the connections happening out there, the flights that feed into our hubs carrying our passengers get in on time, so that folks can make their connecting flights.

Moving people around is a very serious business. I have had people say, "Why do you guys take it so seriously? You're not launching nukes or curing cancer." But at the end of the day, people are counting on us to get them from point A to point B.

That might be the CEO that’s trying to go out and close a big business deal. It might be someone trying to get to see an ailing family member, or someone who's lined up for what could be a life-changing interview. It's our job to get them there on time, in a stress-free manner, and reliably.

Complex environments

We've had a very challenging last couple of years. We recently took two large, complex IT environments and merged them. We picked some applications from Continental, some applications from United, and we had to make these applications interface with each other when they were originally never designed to do so. In the process, we had to scale many of these systems up, and we did that at an incredible pace.

Over and above that, with the complex challenges of merging the two IT systems, we had this phenomenon that's building in the environment that can't be denied, and that's the explosion of mobile. So it was really a perfect storm for us.

We were trying to integrate the systems, as well as stay out in front of our customer demands with respect to mobile and self-service. It became a daunting challenge, and it was very apparent to us going in that we needed good vital signs to survive and to deliver the quality of service our customers have come to expect from us.

From my perspective, I have several customer sets. There are the executives. We don't really know how we're doing if we can't measure it, so we need to be able to provide them metrics so that they understand how we're running IT.

There are the United employees, anyone from the line mechanic to the gate agent to the lobby agent. And then we have our flyers. All of those people deserve reliable data and systems that are available at all times. So when you factor all of that in, we knew we needed good vital signs, so that we could ensure these applications were functioning as designed.

We didn't get there as fast as we would have liked. It was quite a feat to integrate these systems, and we landed on a collapsed Passenger Service System (PSS) back in March of 2012. Unfortunately, given that we were a little late to the game, we had some tough days, but we rallied. We brought HP to the table and told them that we didn't want to be average; we wanted to be world-class.

We created a battle plan, got the troops energized, deployed the power available to us within the HP Management Suite, and executed.

It wasn't without challenges, but we're very proud of the work we've done in a very short period of time. Over an eight-month journey, we've gone from being average at best to, I think, one of the best around in the areas we've gone after.

So it can be done. It just takes discipline, commitment, and a will to be the best. I'm very proud of the team and what they've accomplished.

Using all the tools

Kevin, I like the way you refer to this as "vital signs." When you put the tools in place and had that diagnostic information at your fingertips, what did you see? Was it a fire hose or a balanced scorecard? What did you get, and what did you need to do to make it more actionable?

We own quite a bit of the HP product set, and we decided that in order to be great, we needed to use all of the tools on our tool belt. So we took a methodical approach. We started by getting the infrastructure covered: we made sure SiteScope was watching server health, and that the storage, the databases, the middleware components, the messaging queues, and all of the network infrastructure were monitored as well.
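To make that layered coverage concrete, here is a minimal sketch of how such a coverage map and a basic health probe might be expressed. The layer names, checks, and probed host are hypothetical illustrations only, not United's actual SiteScope configuration.

```python
# Illustrative sketch only: a coverage map of checks per infrastructure layer,
# plus the simplest possible "vital sign" probe. Layer names, check names,
# and the probed host are hypothetical.
import socket

COVERAGE = {
    "servers":    ["cpu_load", "memory_used_pct", "disk_free_pct"],
    "storage":    ["array_health", "io_latency_ms"],
    "databases":  ["connection_count", "replication_lag_s"],
    "middleware": ["queue_depth", "thread_pool_used_pct"],
    "network":    ["interface_errors", "round_trip_ms"],
}

def tcp_port_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Basic reachability check: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for layer, checks in COVERAGE.items():
        print(f"{layer:<11} {len(checks)} checks: {', '.join(checks)}")
    print("web tier reachable:", tcp_port_alive("localhost", 80))
```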

What really started to shine a light on how we were performing, as we rolled all of those events up and correlated them in HP Business Service Management (BSM), was that we were able to understand the impact we were having throughout the environment, because we understood the topology-based event correlation. That was the first model we went after.

You mentioned diagnostics. We started deploying that very aggressively. We have diagnostics deployed on every one of our Java application servers. We also have deployed diagnostics on our .NET applications.

What that has done for us is let us proactively get in front of some of these issues. When we first started dabbling in diagnostics, it was more of a forensics-type activity; we would use it after we were in an incident. Now we use diagnostics to proactively prevent incidents.

We're watching for memory utilization, database connection counts, time spent in garbage collection, and so on. Those fire alerts that weave their way through BSM and cut a Service Manager ticket. We have automation that picks that ticket up, assumes ownership, goes out and does the remediation, and refreshes the monitor. When that's successful, we close the ticket out, updating the Service Manager ticket all the while to ensure we're ITIL compliant.
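As a rough sketch of that closed loop (alert, ticket, automated remediation, monitor refresh, close), here is a minimal Python illustration. The thresholds, ticket fields, and remediation step are hypothetical placeholders; the real chain runs through HP BSM and Service Manager, which are not modeled here.

```python
# Minimal sketch of an "alert -> ticket -> automated remediation -> verify ->
# close" loop like the one described above. Thresholds and the remediation
# step are hypothetical placeholders, not the actual BSM/Service Manager flow.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# Example thresholds of the kind mentioned: heap use, DB connections, GC time.
THRESHOLDS = {"heap_used_pct": 90, "db_connections": 450, "gc_time_pct": 20}

@dataclass
class Ticket:
    summary: str
    status: str = "Open"
    worklog: List[str] = field(default_factory=list)

    def update(self, note: str) -> None:
        # Every automated action is logged, which is what keeps the record
        # audit-friendly (the "ITIL compliant" part of the story).
        self.worklog.append(note)

def breaches(metrics: Dict[str, float]) -> List[str]:
    """Names of any metrics currently above their threshold."""
    return [m for m, limit in THRESHOLDS.items() if metrics.get(m, 0) > limit]

def remediate(metric: str) -> bool:
    """Placeholder for an automated restore (e.g. recycling an app server)."""
    print(f"remediating breach on {metric} ...")
    return True  # assume the restore worked for this sketch

def handle(metrics: Dict[str, float],
           refresh: Callable[[], Dict[str, float]]) -> Optional[Ticket]:
    broken = breaches(metrics)
    if not broken:
        return None
    ticket = Ticket(summary=f"Threshold breach: {', '.join(broken)}")
    ticket.update("Automation assumed ownership of the ticket.")
    for metric in broken:
        if remediate(metric):
            ticket.update(f"Automated restore completed for {metric}.")
    if not breaches(refresh()):          # refresh the monitor and re-check
        ticket.status = "Closed"
        ticket.update("Monitor refreshed clean; ticket closed.")
    return ticket

if __name__ == "__main__":
    before = {"heap_used_pct": 96, "db_connections": 120, "gc_time_pct": 5}
    after = {"heap_used_pct": 40, "db_connections": 120, "gc_time_pct": 5}
    ticket = handle(before, refresh=lambda: after)
    print(ticket.status, ticket.worklog)
```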

We have gotten many of those restores down to under five minutes, where before it was way north of an hour.

Through the use of these tools, we have certainly gained better insight into how our applications are using database connections and how much time we're spending in garbage collection. It really helps us tune, tweak, and size the environments in a much more predictive fashion versus more of a guess. So that's been invaluable to us.

You're probably picking up on a theme that's largely operationally based. We've also begun making pretty good inroads into DevOps, and that's very important for us. We're deploying these agents and monitors all the way back in the development lifecycle. They follow applications from dev to stage, so that by the time we get to prod, we know the monitors are solid. Application teams are able to address performance issues in development.

These tools have really aided the development teams that are participating in the DevOps space with us.

Clear winner

Is there something about HP's ability to span these activities that led you to choose them, and how did you decide on them versus some of the other alternatives?

When we merged and got through the big integration I spoke of last year, clearly we were two companies with two product sets. It became very clear to us, without a doubt, that because of the depth and breadth HP could provide us across stacks, and the ability within those stacks to go up and down, they were the clear winner.

Then you start looking further at why we would reinvent the wheel once something gets to production. The LoadRunner and VuGen scripts that are created back in the development and quality assurance (QA) cycle become your production monitors, and that prevents us from having to perform double work, if you will.

That's a huge benefit we see in the suite. When you couple that with the diagnostic-type information I referred to, it gives our development teams great insight way back in the development cycle. Across the full lifecycle, the HP toolset allows you to span development and stage into production and provides a set of dashboards that let developers understand how their services are running.
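The "script once, reuse everywhere" idea can be sketched as a single synthetic transaction that both a load test and a production probe execute. The endpoint and SLO below are hypothetical, and the real assets would be LoadRunner/VuGen scripts rather than Python.

```python
# Sketch of reusing one scripted user transaction as both a QA load test step
# and a production availability probe. The endpoint and SLO are hypothetical;
# the real scripts would be LoadRunner/VuGen assets.
import time
import urllib.request

def user_transaction(base_url: str) -> float:
    """One scripted journey (here just a home-page hit); returns elapsed seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(f"{base_url}/", timeout=10) as resp:
        if resp.status != 200:
            raise RuntimeError(f"unexpected status {resp.status}")
    return time.monotonic() - start

def run_as_load_test(base_url: str, iterations: int = 5) -> None:
    """QA usage: run the transaction repeatedly and report average timing."""
    timings = [user_transaction(base_url) for _ in range(iterations)]
    print(f"load test: avg {sum(timings) / len(timings):.3f}s over {iterations} runs")

def run_as_production_probe(base_url: str, slo_seconds: float = 2.0) -> bool:
    """Ops usage: the same transaction, run once and judged against an SLO."""
    elapsed = user_transaction(base_url)
    ok = elapsed <= slo_seconds
    print(f"probe: {elapsed:.3f}s -> {'OK' if ok else 'ALERT'}")
    return ok

if __name__ == "__main__":
    target = "https://example.com"  # hypothetical endpoint
    run_as_load_test(target, iterations=3)
    run_as_production_probe(target)
```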

We were very quickly able to bring them on board, because at the end of the day the human factor sets in: what's in it for me? They hear us ops and engineering guys telling them we need to monitor their applications, and when you peel it back, I'm harking back to my days when I used to run software.

Developers are busy, but when you show them the value, that the director of middleware services or business services has a dashboard and can go look at how his services are performing, they identify that value very quickly. They're also very keen on not getting those calls at 3 o'clock in the morning.

It's a slam-dunk for us, and as I say, there was no doubt in our mind as we started down our journey that the HP toolset just couldn't be rivaled in that space.

We're very proud of our accomplishment. We're living proof. We're in a complex, fast-moving industry, we were starting from much further behind than we would have liked, and we bought in and believed in the tools. We used them, partnering with HP, and we were able to come a long way.

What really started moving the dial for us with respect to remediation time, lowering mean time to restore (MTTR), and drastically improving our availability is the use of diagnostics and automated restores where we can do them. We can't restore everything automatically, but if we can take the noise away so our operations teams can focus on the tough stuff, that's what it's all about with the BSM Topology-Based Event Correlation (TBEC) views.

Early in our journey, we very quickly got good at identifying an issue before the customer called in, which was not always the case. That's step one. You never want a customer calling in and saying, "I want to let you know your application is down," and you say, "Thank you very much. We'll take a look at that."

Very difficult

That shaves a few minutes, but honestly then the Easter egg hunt starts. Is it a server, a network, a switch, the SAN, a database, or the application? So you start getting all of these people on the phone, and they start sifting through logs and trying to understand what this alert means with respect to the problem at hand. It's very difficult when you have thousands of servers and north of a thousand applications spread across five datacenters. It's just very difficult.

Using correlated views, understanding the dependencies, and seeing the item within the infrastructure that's causing the problem turn red and bubble up to the other applications that are impacted allows us to zero in and fix that issue right off the bat, versus losing an hour getting people on the phone to check things and figure out whether it's them or not.
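A minimal sketch of that topology-based correlation idea, assuming a purely hypothetical dependency map: the failing item is treated as the probable cause, and everything that depends on it is flagged as impacted.

```python
# Sketch of topology-based event correlation: a dependency map lets one raw
# infrastructure event be flagged as the probable cause while the applications
# above it are marked as impacted. The topology and names are hypothetical.
from collections import deque

# "X depends on Y" edges: each component maps to the components it relies on.
DEPENDS_ON = {
    "reservations_app": ["app_server_1", "bookings_db"],
    "checkin_app":      ["app_server_2", "bookings_db"],
    "app_server_1":     ["san_volume_7"],
    "app_server_2":     ["san_volume_7"],
    "bookings_db":      ["san_volume_7"],
}

def impacted_by(failed: str) -> set:
    """Walk the dependency graph upward from the failed component."""
    impacted, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for dependent, deps in DEPENDS_ON.items():
            if node in deps and dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

if __name__ == "__main__":
    root_cause = "san_volume_7"  # the item that "turns red"
    print("probable cause:", root_cause)
    print("impacted:      ", sorted(impacted_by(root_cause)))
```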

So automating what can be automated for restores, combined with the event-based correlation, is what took our operational performance from what I would call maybe a D- to an A+.

As we've matured and gained insight into our environment with metrics, we've been able to get out of firefighting mode. It's allowed us to step back and start working on engineering with respect to how we utilize our assets. With all of this data, we now understand how the servers are running, and through getting engaged early in DevOps, with the rich information we get through load testing and the like, we're able to size our environments better.

As part of that, it gives us flexibility with respect to where we place some of these applications, because now we're working with scientific data versus gut feel and emotion. That's enabling us to build a new datacenter and, as part of that, we're definitely looking at increasing our geographic dispersion.

The biggest benefit we're seeing, now that we have gotten to more or less a stable operation, is that we're able to focus in on the engineering and strategically look at what our datacenter of the future looks like. As part of that, we're making a heavy investment in cloud, private right now.

We may look at bursting some workloads to the public side, but right now we're focused on an internal cloud. For us, cloud means automated server builds, self-service, and a robot building the environment so that human error is taken out. When that server comes online, it's in asset management, it's got monitors in place, and it was built the same way every time.
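As a sketch of that "robot builds the server" idea, with placeholder step names rather than any real provisioning tooling: every server walks through the same ordered steps (standard image, asset registration, monitoring, compliance), which is what takes the human error out.

```python
# Sketch of an automated server-build pipeline: provision, register the asset,
# attach monitors, check compliance, always in the same order. The step names
# are placeholders, not any particular provisioning product.
from dataclasses import dataclass, field
from typing import List

BUILD_STEPS = (
    "build_from_standard_image",
    "register_in_asset_management",
    "deploy_monitoring_agents",
    "run_compliance_checks",
)

@dataclass
class Server:
    name: str
    role: str
    steps_completed: List[str] = field(default_factory=list)

def provision(name: str, role: str) -> Server:
    server = Server(name=name, role=role)
    for step in BUILD_STEPS:
        # Each step would call real tooling; here it only records that it ran.
        server.steps_completed.append(step)
    return server

if __name__ == "__main__":
    web01 = provision("web01", role="web")
    web02 = provision("web02", role="web")
    # Identical build path every time is the point: human error is taken out.
    assert web01.steps_completed == web02.steps_completed == list(BUILD_STEPS)
    print(web01)
```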

Now that we are moving out of the firefighting mode and more into the strategic and engineering mode, that's definitely paying big dividends for us.

Disclosure: HP is a sponsor of BriefingsDirect podcasts.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
