ExtraHop - application aware performance management

Summary: It is hard to disagree with the statement that organizations are dealing with a complex environment. ExtraHop believes that performance management tools should go beyond merely watching operational logs or instrumenting applications to being application aware. My question is how this differs from the approaches offered by others.

TOPICS: Data Centers

ExtraHop dropped by to let me know about their newest messaging in their quest to be the leading application and end user experience performance monitoring company. The company believes that APM and EUEM tools should cut across all of the tiers of a distributed, multi-tier application environment to gather operational data and should be built to understand how applications work. That capability, the company would say, would allow tools to easily detect anomalies and help IT administrators prevent slowdowns and failures.

They believe that tools should be able to follow and understand the entire flow of communication between and among application components, operating systems, data managers, storage systems and network components. In their words, products should be able to do the following:

  • Be able to tell if there is communication going on at all. This is what a traditional network probe offers.
  • Understand who is talking with whom. This is what a network flow management tool offers.
  • Understand what words are part of the conversation. This is what a packet inspection tool offers.
  • Understand what words are typically used together. This is what a multi-packet inspection tool offers.
  • Understand both sides of the conversation. They're calling this full-stream assembly.
  • Understand what is being said. They're calling this full-content analysis.
  • Understand what the conversation means. They're calling this transaction analysis.
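The later items on this list hinge on reassembling both sides of a conversation before any content analysis can happen. As a toy illustration of that "full-stream assembly" idea, the sketch below reorders out-of-order packet payloads by sequence number and joins each direction of a connection into a continuous byte stream. The data model and function names here are my own illustrative assumptions, not ExtraHop's implementation:

```python
from collections import defaultdict

# Toy packets: (stream_id, direction, seq, payload). This schema is an
# assumption for illustration, not any vendor's actual data model.
packets = [
    ("conn1", "client", 2, b"/index HTTP/1.1\r\n"),
    ("conn1", "client", 1, b"GET "),
    ("conn1", "server", 1, b"HTTP/1.1 200 OK\r\n"),
]

def assemble_streams(packets):
    """Reorder packets by sequence number and join each direction's bytes,
    yielding both sides of the conversation ('full-stream assembly')."""
    streams = defaultdict(list)
    for stream_id, direction, seq, payload in packets:
        streams[(stream_id, direction)].append((seq, payload))
    return {
        key: b"".join(payload for _, payload in sorted(chunks))
        for key, chunks in streams.items()
    }

streams = assemble_streams(packets)
# With both sides assembled, content analysis can parse the protocol:
request = streams[("conn1", "client")]
print(request)  # b'GET /index HTTP/1.1\r\n'
```

Only once the stream is whole can a tool move up the list from "what words are used" to "what the conversation means," since a single packet may split a request mid-token.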

ExtraHop would posit that the best way to accomplish this is to fully understand the network dialog going on between systems and components, rather than merely instrumenting individual pieces or inserting monitoring tools into application or virtualization managers.

While what ExtraHop has to say rings true to me, I've heard the same things said by many others. A few competitors singing similar songs come immediately to mind: eG Innovations, Netuitive and Prelert. I'm sure that if I did a bit of research on the subject, I'd come up with ten or fifteen others.

Some of these competitors are trying to add "machine intelligence" or "predictive analytics" to the tools that are monitoring and managing complex distributed systems.

In the end, it appears that competitors in this market are all bringing interesting approaches and technology to market, but are having trouble articulating what they do without using the exact same language as everyone else. ExtraHop faces this challenge.

So, IT decision-makers are just going to have to do their own homework and speak with a number of competitors to decide which tool, or combination of tools, best fits their organization's requirements.

Note: after publication of this commentary, ExtraHop's representative contacted me. Her note included the following:

In particular, the deck we sent should demonstrate how we differentiate from competition by offering a comprehensive solution that meets all of the requirements that you listed. In this way, we are moving beyond the traditional APM landscape into the realm of true IT Operational Intelligence – a term, which really only one vendor has attached to itself, Splunk (also an ExtraHop partner). Slide 12 in particular lists the defining characteristics of IT Operational Intelligence, which are also the fundamental components of ExtraHop’s solution.

The company is going to schedule an additional discussion about the concept of "IT Operational Intelligence." I'll comment here after we have that discussion.



Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.



1 comment
  • blinded by packets of data rather than information


    Hardware and software trends are running counter to this network-centric viewpoint. QoS and other forms of optimization and control typically found at the network layer are moving up the stack into the middleware and application runtimes. We are seeing greater adoption of in-process parallel processing and in-memory data grids, in which there is practically no I/O or network consumption. Awareness needs to move both upwards and downwards.


    Observation and intelligence are now being embedded directly within the application runtime and libraries.


    It would be far better if, instead, we focused on how service suppliers in a complex cloud service supply chain can add instrumentation and measurement information to payloads, since instrumenting network nodes is not always possible in the cloud, especially in a complex interaction.
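The commenter's suggestion of attaching measurement information to payloads resembles distributed-tracing context propagation, where each service appends its own identity and timing to metadata that travels with the message. A minimal sketch of that idea follows; the `_trace` key, field names, and `annotate_payload` function are my own illustrative assumptions, not any vendor's format:

```python
import time
import uuid

def annotate_payload(payload, service):
    """Append this service's identity and timestamp to trace metadata carried
    inside the payload itself, so measurement data travels with the message
    rather than relying on instrumented network nodes. (Illustrative scheme,
    loosely modeled on distributed-tracing context propagation.)"""
    trace = dict(payload.get("_trace", {"trace_id": uuid.uuid4().hex, "hops": []}))
    trace["hops"] = trace.get("hops", []) + [{"service": service, "ts": time.time()}]
    return {**payload, "_trace": trace}

# Each supplier in the chain adds its own hop as the message passes through:
msg = annotate_payload({"order_id": 42}, "gateway")
msg = annotate_payload(msg, "billing")
print([hop["service"] for hop in msg["_trace"]["hops"]])  # ['gateway', 'billing']
```

Because the trace rides inside the payload, every hop is visible at the end of the chain even when the intermediate network nodes cannot be instrumented.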