Aternity: end-user experience management

Discovering performance issues in a complex environment that includes distributed systems, virtual systems and cloud-based workloads is hard. Adding smartphones, tablets and other staff-owned devices to the mix makes it worse. Aternity believes it has the solution.
Written by Dan Kusnetzky, Contributor

What is Aternity announcing?

Aternity recently launched Mobile Frontline Performance Intelligence (MFPI), a product designed to monitor mobile applications delivered as virtual desktop solutions (XenApp), native iOS and Android applications, and mobile web applications. Aternity offers a software development kit (SDK) that makes it possible for developers to instrument their applications. Aternity also offers the ability to inject JavaScript code into web pages, making it possible to collect actual system and performance data.
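Aternity's actual injection mechanism isn't described in detail, but the general technique is straightforward: a server-side component splices a small JavaScript timing beacon into outgoing HTML before it reaches the browser. The sketch below illustrates the idea in Python; the snippet contents and the beacon endpoint are hypothetical, not Aternity's.

```python
# Illustrative sketch only: the beacon endpoint and snippet are invented
# to show the general technique of injecting JavaScript instrumentation
# into web pages so the browser reports real load times back to a collector.

BEACON_SNIPPET = (
    '<script>'
    'window.addEventListener("load", function () {'
    '  var t = performance.timing;'
    '  var img = new Image();'          # fire-and-forget beacon request
    '  img.src = "/perf-beacon?load="'
    '    + (t.loadEventStart - t.navigationStart);'
    '});'
    '</script>'
)

def inject_beacon(html: str) -> str:
    """Insert the timing snippet just before </body>, if present."""
    marker = "</body>"
    if marker in html:
        return html.replace(marker, BEACON_SNIPPET + marker, 1)
    return html + BEACON_SNIPPET  # fall back: append at the end

page = "<html><body><h1>Hello</h1></body></html>"
instrumented = inject_beacon(page)
```

Because the measurement runs in the end user's browser, it captures the experience the user actually had, not just what the server saw.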

The company points out that other performance management software is designed to monitor only part of the overall computing solution: server performance, database performance, application server performance and other support functions on the server. Aternity believes it is the only company to have developed ways to fully instrument client devices such as laptops, tablets, smartphones and just about any web-enabled device.

Snapshot analysis

I agree that monitoring performance and being able to quickly isolate problems and fix them is increasingly challenging. Applications that used to be architected as a single, monolithic block of code that was hosted by a mainframe or midrange system are now distributed, multi-tier, complex collections of services.

It is true that finding out what is happening from moment to moment is the challenge many suppliers are addressing. I've spoken with at least 10 different suppliers, each of whom thinks it is uniquely qualified to solve this problem.

The approaches fall into four categories.

Building management in

Instrument everything so that internal operational data is available when management tools ask for it. While this approach offers the most fine-grained collection and presentation of data, it can impose a great deal of management overhead and slow down production systems.

It also may not deal effectively with multi-tier distributed applications or applications that operate in a cloud computing environment. Furthermore, developers and technology suppliers have often chosen not to collect this data.

It is wise, however, for an application performance management tool to know what data is available and use it.

Agents everywhere

Another highly touted approach is to install agents that collect data on every single component and send it to a central collection point for later analysis.

The truth is that installing agents on every client (PC, laptop, smartphone, tablet), server (physical, virtual or cloud-based), network device, storage device, database engine, application framework and application component isn't really practical. Persuading staff and customers to install agent software on their client devices is quite problematic, and installing it without consent on customer systems could get the organization into a great deal of hot water.

Furthermore, agents can only see local operational data. Unless the supplier has developed techniques to capture data from networking and storage systems, it is likely that critical pieces of the performance puzzle will not be uncovered.

As before, it is wise for application performance management tools to be aware of these agents and to use the data they've collected when it is available.
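The "local operational data" an agent can see is essentially whatever the host operating system exposes. A toy Python agent might sample a few such values and serialize them for shipment to a central collector; the collector URL is hypothetical, the network send is stubbed out, and os.getloadavg is POSIX-only:

```python
import json
import os
import time

COLLECTOR_URL = "https://collector.example.com/metrics"  # hypothetical endpoint

def sample_local_metrics() -> dict:
    """Gather a few metrics visible only from inside this host."""
    load1, load5, load15 = os.getloadavg()  # POSIX-only call
    return {
        "host": os.uname().nodename,
        "ts": time.time(),
        "load1": load1,
    }

def ship(payload: dict) -> bytes:
    # A real agent would POST these bytes to COLLECTOR_URL; serializing
    # here simply shows the shape of what crosses the wire.
    return json.dumps(payload).encode()

wire_bytes = ship(sample_local_metrics())
```

Note what is missing from the payload: nothing about the switch the packets traverse or the storage array behind the database, which is exactly the blind spot described above.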

Follow the network traffic

A clever approach is to follow the network traffic: learn which components are talking to which, where those components are located and how they move throughout the network, and then apply sophisticated analysis to that traffic data.

This approach inserts only a tiny amount of overhead into production systems. Network packets need to traverse only a single additional network hop, and a wealth of operational information can be collected.

If the performance management tool were well designed and had built-in knowledge of all of the popular operating systems, virtualization tools, database engines, application frameworks, networking and storage components and commercial application components, there would be no need for customers to configure the system. Plugged into the network and given a chance to watch the traffic, it would learn what it needed.

Applications or their components could be running on mainframes, midrange systems, industry-standard systems, PCs, Macs, smartphones, tablets or even instrumented coffee makers; operational data would still be collected.

One key challenge to this approach is developing a system that can scan through network traffic fast enough to deal with today's high volume, low latency networking environments. Another challenge is learning enough about every popular application, framework, networking component, etc. to build a complete picture of what's happening.
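A toy version of the learning step makes the idea concrete: given passively observed flow records, the tool infers which components depend on which, using built-in knowledge of well-known ports to label the services. Every hostname, port and mapping below is invented for illustration:

```python
from collections import defaultdict

# Toy flow records as might be captured from a network tap:
# (source host, destination host, destination port).
flows = [
    ("web-1", "app-1", 8080),
    ("web-2", "app-1", 8080),
    ("app-1", "db-1", 5432),
    ("app-1", "cache-1", 6379),
]

# "Built-in knowledge of popular components", keyed by well-known port.
PORT_TO_SERVICE = {8080: "app server", 5432: "PostgreSQL", 6379: "Redis"}

def learn_topology(records):
    """Build a dependency map: who talks to whom, over what service."""
    deps = defaultdict(set)
    for src, dst, port in records:
        service = PORT_TO_SERVICE.get(port, f"port {port}")
        deps[src].add((dst, service))
    return deps

topology = learn_topology(flows)
# app-1 depends on both the database and the cache:
print(sorted(topology["app-1"]))
# → [('cache-1', 'Redis'), ('db-1', 'PostgreSQL')]
```

The two challenges above map directly onto this sketch: a production tool must run the loop at line rate against millions of packets per second, and its PORT_TO_SERVICE equivalent must cover every popular application and framework, not three entries.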

A mixture of the previous three

Each of the three previous approaches has merit at times. At other times, each demonstrates serious flaws; that is, part of the action is missed. The most sophisticated application performance management products use a combination of all three approaches to gather data for analysis.

Quick Summary

Aternity's performance management portfolio offers a mix of functions that are designed to address quite a number of performance issues. It appears to me that each of the functions the company has developed would be quite usable and could address many, but not all, of the performance issues that could turn up in a complex multi-platform, multi-application, multi-site environment.

Offering a toolkit for developers means that custom applications can be instrumented. Unfortunately, only a small portion of the applications found on PCs, laptops, smartphones, tablets and other intelligent devices are likely to have been instrumented.

Injecting JavaScript instrumentation into web pages would work much of the time. Security experts, however, might have told an organization's staff to turn off that feature of their browsers to eliminate or reduce the chance that malicious websites could deliver worms or viruses via JavaScript.

Monitoring the server application components and network traffic would provide a useful solution for other applications.

Aternity is offering an interesting mix of capabilities. I'd suggest seeing a demonstration to learn if that mix of features would address your organization's environment.
