Differences between approaches to APM - a chat with Jesse Rothstein of ExtraHop

There are three basic approaches to application performance management. Jesse Rothstein, CEO and co-founder of ExtraHop, presents his view of which is the best.
Written by Dan Kusnetzky, Contributor

After reading a Virtually Speaking piece on one of many competitors in the application performance management (APM) market, Jesse Rothstein, CEO and co-founder of ExtraHop, reached out to arrange a conversation about the rapidly growing area.

Since I've spoken with a number of ExtraHop's competitors over the last year, I thought it would be highly illuminating to speak with Jesse. Here is a quick summary of a very interesting, rambling hour that included a review of management technology, the benefits of each approach, and a few side journeys into computer archeology.

There are three traditional approaches to management

There are three basic approaches to collecting, storing, analyzing and presenting operational data on operating systems, networking, storage, database engines, application frameworks and applications, all of which are critical components of modern applications. It used to be relatively straightforward to examine the health and performance of these components because they were all hosted together on the same mainframe or midrange system.

Distributed, virtual and cloud-based applications pose a challenge

The key challenge, Jesse pointed out, is that these components are no longer hosted on a single machine as in times past. Each function is now likely to have been architected as an internet service; to be running on multiple systems to increase scalability, reliability and performance; and, in today's virtualized world, to be moving from one system to another to meet service-level objectives and ride out momentary outages.

Here is a quick summary of the three approaches we discussed:

Building management in

Instrument everything so that internal operational data is available whenever management tools ask for it. While this approach offers the most fine-grained collection and presentation of data, it can impose a great deal of management overhead and slow down production systems.

It also may not deal effectively with multi-tier distributed applications or applications that operate in a cloud computing environment. Furthermore, developers and technology suppliers have often chosen not to collect this data at all.

It is wise, however, for an application performance management tool to know what data is available and use it.
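To make the idea concrete, here is a minimal sketch in Python (standard library only) of what building management in can look like: the application records its own call counts and latencies and answers when a tool asks. Every name, port and metric here is hypothetical, and real instrumentation frameworks are far richer than this.

```python
# A sketch of "building management in": the application itself keeps
# internal operational data and answers when a management tool asks.
# All names, the port and the metrics are hypothetical illustrations.
import functools
import json
import threading
import time
from collections import defaultdict
from http.server import BaseHTTPRequestHandler, HTTPServer

_metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})
_lock = threading.Lock()

def instrumented(func):
    """Record how often a function runs and how long it takes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            with _lock:
                entry = _metrics[func.__name__]
                entry["calls"] += 1
                entry["total_seconds"] += time.perf_counter() - start
    return wrapper

class MetricsHandler(BaseHTTPRequestHandler):
    """Expose the collected data over HTTP for tools that ask for it."""
    def do_GET(self):
        with _lock:
            body = json.dumps(_metrics).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

@instrumented
def handle_order(order_id):
    time.sleep(0.01)  # stand-in for real application work

if __name__ == "__main__":
    server = HTTPServer(("", 8080), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    for order_id in range(100):
        handle_order(order_id)
    time.sleep(60)  # keep serving so a tool can poll http://localhost:8080/
```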

Agents everywhere

Another highly touted approach is to install agents that collect data on every single component and send it to a central collection point for later analysis.

The truth is that installing agents on every client (PC, laptop, smartphone, tablet), server (physical, virtual or cloud-based), network device, storage device, database engine, application framework and application component isn't really practical. Persuading staff and customers to install agent software on their client devices is quite problematic, and installing it without consent on customer systems could get the organization into a great deal of hot water.

Furthermore, agents can only see local operational data. Unless the supplier has developed techniques to capture data from networking and storage systems, it is likely that critical pieces of the performance puzzle will not be uncovered.

As before, it is wise for application performance management tools to be aware of these agents and use the data they've collected when it is available.
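A minimal version of the agent approach might look like the following Python sketch, again using only the standard library: a tiny agent periodically samples local operating-system data and ships it to a central collector. The collector URL is hypothetical, and a production agent would also batch, retry and authenticate; as noted above, it would still see only local data.

```python
# A sketch of the agent approach: sample local metrics, ship them to a
# central collection point. The collector endpoint is hypothetical.
import json
import os
import socket
import time
import urllib.request

COLLECTOR_URL = "http://collector.example.com/ingest"  # hypothetical

def sample():
    """Gather a few locally visible metrics (Unix-only load average here)."""
    load_1m, load_5m, load_15m = os.getloadavg()
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "load_1m": load_1m,
        "load_5m": load_5m,
        "load_15m": load_15m,
    }

def ship(metrics):
    """POST one sample to the central collection point as JSON."""
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(metrics).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()

if __name__ == "__main__":
    while True:
        ship(sample())
        time.sleep(60)  # agents see only local data, sampled periodically
```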

Follow the network traffic

Jesse believes that the best approach is to follow the network traffic and learn who is talking to whom, what is talking to what, where components are located, and how those components move throughout the network, and then apply sophisticated analysis to that traffic data.

This approach inserts only a tiny amount of overhead into production systems: network packets would experience only a single additional network hop, yet a wealth of operational information could be collected.

If the performance management tool were well designed and had built-in knowledge of all of the popular operating systems, virtualization tools, database engines, application frameworks, networking and storage components, and commercial application components, there would be no need for customers to configure the system. Plugged into the network and given a chance to watch the traffic, it would learn what it needed.

Another great point: applications or their components could be running on mainframes, midrange systems, industry standard systems, PCs, Macs, smartphones, tablets or even instrumented coffee makers, and as long as they communicate over the network, operational data would still be collected.

One key challenge to this approach is developing a system that can scan through network traffic fast enough to deal with today's high volume, low latency networking environments. Another challenge is learning enough about every popular application, framework, networking component, etc. to build a complete picture of what's happening.
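For illustration, here is a toy version of wire-data observation in Python, assuming the third-party scapy capture library (packet capture generally requires administrative privileges). It merely tallies who is talking to whom; a commercial product would reassemble sessions and analyze full transactions at line rate, which this sketch makes no attempt to do.

```python
# A toy passive observer: watch traffic without touching the hosts
# involved and tally which addresses are talking to which. Assumes the
# third-party scapy library (pip install scapy) and capture privileges.
from collections import Counter

from scapy.all import IP, sniff

conversations = Counter()

def observe(packet):
    """Record one observed conversation, src address -> dst address."""
    if packet.haslayer(IP):
        conversations[(packet[IP].src, packet[IP].dst)] += 1

# Watch 1,000 packets, then print the busiest talkers the tool learned.
sniff(prn=observe, store=False, count=1000)
for (src, dst), packets in conversations.most_common(10):
    print(f"{src} -> {dst}: {packets} packets")
```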

Snapshot analysis

Jesse's analysis covered all of the approaches other suppliers have discussed and pinpointed the shortcomings I had already observed in them. This makes me think that ExtraHop's products, which are based upon this type of insight, are likely to be winners.
