After reading a recent post, Zenoss CMO Chris Smith reached out to suggest that "legacy solutions depend on vendor lock-in in order to survive, as their only sources of revenue are driven from rising maintenance costs, professional services, auditing customers for license compliance, or bundling products within costly 'new' suites."
Zenoss is a former Kusnetzky Group client and is always good for an interesting conversation.
Legacy: isn't that a good thing?
Vendors often describe the products currently installed in an organization's data center as "legacy." They appear to define that term as "the products we hope to displace with our products." They use it as a pejorative in the hope that a busy IT decision maker will drop what he's currently doing and embrace the vendor's own products.
Since these tools are already in the data center, IT staff have developed experience with them and they are supporting the business. That's a good thing, right?
The golden rules of IT
Vendors who play the "legacy" card often don't appear to understand the Golden Rules of IT (for more information about these rules, please read Reprise of the Golden Rules of IT). The key rules I'm thinking about are rules one and two.
- If it's not broken, don't fix it. Most organizations simply don't have the time, the resources, or the funds to re-implement things that are currently working.
- Don't touch it, you'll break it. Most organizations of any size are using a complex mix of systems that were developed over several decades. Changing working systems that are based upon older technologies, older architectures, and older methodologies has to be done very carefully if the intended results — and only the intended results — are to be achieved.
Tools built for monolithic applications can't manage today's world
Zenoss asserts that the currently installed performance monitoring and management tools were designed at a time when monolithic applications executed on an organization's mainframes or UNIX-based midrange machines.
They are hopelessly broken when faced with the task of monitoring and managing today's service-oriented, multi-tier, multi-site applications. Batch performance management tools, Zenoss states, aren't fast enough to deal with today's rapidly moving computing environments.
Smith was quick to point out that Zenoss was built from the ground up to live in a web-based or cloud-based world.
Zenoss has a point that the world has moved on from the days in which an application or entire workload lived on a single machine. Most of today's applications are built as a series of general-purpose services that have been lashed together to address a single business requirement.
Monitoring just the mainframe or the UNIX midrange systems that are supporting some of those services just isn't good enough any more.
IBM has proven that the mainframe is still relevant in this new world. The suppliers of UNIX-based midrange systems, such as Fujitsu, HP, and IBM, point out that their systems are still at the heart of enterprise applications.
The suppliers of Linux-based midrange systems, such as Dell, HP, and IBM, would chime in to say that their products are supporting many web- and cloud-based enterprise applications. Don't leave out Microsoft either. If we add a constellation of desktop, laptop, and handheld client devices, we prove the point that watching just the back-end systems won't capture what is really happening.
Zenoss has developed some powerful tools that can help, but then again there are many others banging the same drum. Bluestripe, Compuware, Hyperic, New Relic, and OpTier are addressing similar needs with their products.
If your organization is facing challenges with performance and can't quite identify the root cause, tools from Zenoss would be worth a look. They have good answers, but aren't the only game in town.