A long list of suppliers is flogging the concept of predictive analytics: the use of machine intelligence to sift through the massive, ever-growing amount of log data that operational systems generate and predict problems before they occur, so they can be resolved before the IT house of cards falls down. Is this a tool organizations can rely on, or just the most recent catchphrase?
If what I've heard from suppliers, such as BMC, CA, ExtraHop, HP, IBM, Netscout, Netuitive, New Relic, Opnet, Prelert, Zenoss and a number of others, is true, this technology has moved from the computer science lab and into the data center.
What is "predictive analytics" anyway?
Predictive analytics for performance management is the use of machine intelligence (machine learning, modeling, statistics and data mining) to sift through the mounds of log and performance data created by operational systems, learn how systems work together, and identify which anomalies appear immediately before workload slowdowns or outages, all in order to predict the next problem before it occurs.
This technology isn't really new. It has been used in financial services for credit scoring as well as evaluating loan applications to create a ranking that can be used to predict whether payments will be made on a timely basis. It has also been used in evaluating behavior of populations to determine likelihood of health problems.
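To make the core idea concrete, here is a minimal sketch of the kind of anomaly detection these products build on: watching a performance metric and flagging readings that deviate sharply from the recent baseline. The function name, window size and threshold are illustrative assumptions, not taken from any vendor's product; commercial tools layer far more sophisticated modeling on top of this basic statistical idea.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag readings far outside the rolling baseline (simple z-score test)."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]          # the most recent "normal" window
        mu, sigma = mean(baseline), stdev(baseline)
        # A reading more than `threshold` standard deviations from the
        # baseline mean is treated as a possible precursor to trouble.
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady ~100 ms response times, then a spike of the sort that might
# precede a slowdown or outage.
latency_ms = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 250]
print(detect_anomalies(latency_ms))  # → [11]: only the spike is flagged
```

The point of the sketch is the workflow, not the math: learn what "normal" looks like from history, then alert on deviations early enough that an administrator can act before users notice.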
Can the use of predictive analytics help IT administration?
If the suppliers' customer stories are to be believed, this technology, if properly configured and used, can detect application performance issues both from an end user's perspective and as seen in the data center across applications, databases, networks and storage. Each of the suppliers mentioned above has presented many success stories and explained why its products are uniquely qualified to help organizations of all sizes.
Things to consider
Although not a complete list, here are a few suggestions for those considering this technology, drawn from an analysis of hours of briefings, painful slogging through user success stories and some common sense:
- Before adopting a product, learn whether the software can automatically detect and learn about the systems, the storage, the networking media, the operating systems, the application frameworks, the database engines, and the applications in use in the organization today and projected for the future. Don't forget to find out whether the tool can deal with the newest generation of intelligent clients that staff members are likely to bring in from home.
- Try out the product, or see a demo, to gauge how complex and time-consuming installation, configuration, testing and day-to-day use of the software are. If using the software is more trouble than just dealing with problems as they come up, the software isn't likely to get used.
- Test out the machine learning aspect of the tools. Do they really find everything in use in the IT infrastructure? Are they really able to suggest needed changes? Can they actually manage workload components, or do they just provide alerts and reporting?
- Consider the pricing and licensing model of the product offering. Is it likely to be cost-effective in your own environment?
In my view, none of the product offerings is perfect for all uses and environments. Each, however, appears useful and helpful at times. Make suppliers prove their claims by providing actual customer references.
Furthermore, I'd suggest that you call those references to learn more about the product. Be sure to ask where the product fails to meet expectations and what the reference has done to work around the product limitations.
It is also valuable to find out whether the supplier's service and support is helpful and friendly. I've heard of cases in which the reference thinks the product is the best thing since pockets, far better than sliced bread or canned beer, but getting answers about product configuration or issues is difficult.
Is it time for predictive analytics? The answer appears to be "yes."