
Instrumenting the enterprise

Written by Dan Farber

Looking back on 2003, you could say it was a year of many overlapping trends, which we faithfully chronicled in our daily Tech Update newsletter. As I speculated last month, those trends (offshore outsourcing, Web services, wireless, virtualization, utility computing, open source, and VoIP, among others) will continue to dominate the headlines and capture enterprise budgets this year. The major unifying objectives are reducing cost and increasing the ROI on IT expenditures. Those are worthy goals, but they don’t capture what I predict will be an important theme this year: instrumenting IT.

We have talked for years about “architecting” IT, creating blueprints for building IT infrastructure, business processes, and services. Instrumenting, a term with musical roots, is defined as “the arrangement or orchestration resulting from such practice.” In the IT world, instrumenting refers to the practice of monitoring and managing the behavior of the “instruments” of IT infrastructure, at multiple levels, to allow for more efficient administration and orchestration.

The idea of instrumentation is to build (architect) into the software and hardware the rules of acceptable behavior and the capability to distribute vital information about the “state” of an instrument. This applies to the underlying program code as well as to data, such as correlating identity and access management data.
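
To make that idea concrete, here is a minimal sketch in Java using JMX (Java Management Extensions), one common mechanism for exposing an application’s state to management consoles. The CheckoutService component and its attributes are invented for illustration; this is a sketch of the pattern, not a prescription.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Management interface: the "state" this instrument agrees to publish (hypothetical).
interface CheckoutServiceMBean {
    long getOrdersProcessed();
    long getFailedOrders();
    boolean isAcceptingTraffic();
}

// The instrumented component registers itself with the platform MBean server,
// so any JMX-aware console or management tool can read its state at runtime.
public class CheckoutService implements CheckoutServiceMBean {
    private final AtomicLong processed = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();
    private volatile boolean acceptingTraffic = true;

    public long getOrdersProcessed() { return processed.get(); }
    public long getFailedOrders()    { return failed.get(); }
    public boolean isAcceptingTraffic() { return acceptingTraffic; }

    public void processOrder(boolean ok) {
        if (ok) processed.incrementAndGet(); else failed.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        CheckoutService service = new CheckoutService();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(service, new ObjectName("demo:type=CheckoutService"));
        service.processOrder(true);   // simulated work
        Thread.sleep(60_000);         // keep the JVM alive so a console can attach
    }
}
```

The point of the sketch is the shape of the contract: the component declares what state it exposes, and management tooling reads that state through a standard channel rather than scraping logs after the fact.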

Enterprises are already instrumenting parts of data centers, security infrastructure, and other domains. Companies have data coming in from a variety of sources, such as security appliances, system monitors, and applications. We are moving from isolated and primitive instruments in an increasingly complex distributed computing environment to more comprehensive instrumentation, analogous to how a patient in an intensive care unit is instrumented. The patient is hooked up to many monitoring and therapy delivery devices, and the data is correlated to provide an accurate diagnosis and invoke appropriate remedies.

Several companies today provide tools for instrumenting applications as part of systems management. IBM’s Tivoli Global Enterprise Manager, for example, provides a way to instrument (monitor and control) applications. Business activity monitoring (BAM) uses the output of instrumented applications as fodder for managing business processes. BAM tracks events from a specified domain, such as a network of ATMs, to check for anomalous behavior. A next phase will be extending BAM to EAM (enterprise activity monitoring), which places business processes in the broader context of the enterprise infrastructure. This kind of instrumentation, delivered through a variety of tools such as integration brokers, systems management products, or more integrated suites, provides a more comprehensive view of the inner workings of critical business functions. However, BAM is still more an afterthought than a function built from the ground up into the business process.
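
To make the BAM idea less abstract, the sketch below shows the kind of rule a monitoring layer might apply to an event stream from a network of ATMs. The AtmEvent type, the withdrawal threshold, and the one-minute window are all invented for illustration; real BAM products are configuration-driven rather than hand-coded like this.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical business-level event emitted by an instrumented ATM application.
record AtmEvent(String atmId, String type, double amount, long timestampMillis) {}

// A toy BAM-style rule (one instance per ATM): flag a burst of withdrawals
// inside a short window, which could indicate fraud or a malfunction.
class BurstWithdrawalRule {
    private static final int MAX_WITHDRAWALS = 5;
    private static final long WINDOW_MILLIS = 60_000;
    private final Deque<Long> recentWithdrawals = new ArrayDeque<>();

    boolean isAnomalous(AtmEvent e) {
        if (!"WITHDRAWAL".equals(e.type())) return false;
        recentWithdrawals.addLast(e.timestampMillis());
        // Drop withdrawals that fall outside the sliding window.
        while (e.timestampMillis() - recentWithdrawals.peekFirst() > WINDOW_MILLIS) {
            recentWithdrawals.removeFirst();
        }
        return recentWithdrawals.size() > MAX_WITHDRAWALS;
    }
}

public class BamSketch {
    public static void main(String[] args) {
        BurstWithdrawalRule ruleForAtm17 = new BurstWithdrawalRule();
        long now = System.currentTimeMillis();
        for (int i = 0; i < 8; i++) {
            AtmEvent e = new AtmEvent("atm-17", "WITHDRAWAL", 100.0, now + i * 1_000);
            System.out.println("event " + i + " anomalous? " + ruleForAtm17.isAnomalous(e));
        }
    }
}
```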

The practice of combining architecting and instrumenting IT throughout an enterprise is still in an embryonic stage. Most instrumentation beyond monitoring events and feeding them to a console occurs ex post facto. In many cases, instrumentation is isolated to certain components, such as servers and databases, or it must be retrofitted. Instrumented applications from one vendor may not work well with those from other vendors. As a result, the cost of instrumenting is compounded by the high cost of integration services. In addition, most instrumentation is focused on monitoring rather than on taking dynamic actions based on a set of inputs and rules.

Utility computing connection
Efforts by the major platform vendors to bring utility computing (on demand, adaptive enterprise, etc.) to the masses are pushing the instrumentation envelope. Pay-as-you-go server and storage capacity, for example, requires instrumentation to monitor and allocate resources based on a service level agreement.

Microsoft is tackling this problem holistically with its System Definition Model (SDM), a schema for creating “operationally aware” applications, with XML as the underlying technology. Rather than instrumenting applications as an afterthought, the idea is for developers to build instrumentation, covering both resource requirements and system monitoring, into applications at the beginning of the development process.

As described by Microsoft, an SDM description would enable the operating system to deploy an application automatically, including the allocation of server, storage, and networking resources. Based on the description, or instrumentation, the SDM can adjust resource allocation during operation. The instrumentation also provides a foundation for detecting problems with an application or set of business processes and for triggering self-healing functions.
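
Microsoft has not published SDM in its final form, so the following is only a rough, non-SDM sketch of the general idea: an application ships with a machine-readable statement of its resource needs, and a deployment routine checks those needs against available capacity before placing it. The AppDescription fields and the host pool are invented for illustration.

```java
import java.util.List;

// Invented stand-in for an "operationally aware" description shipped with an app.
record AppDescription(String name, int cpuCores, int memoryGb, int storageGb) {}

record Host(String id, int freeCores, int freeMemoryGb, int freeStorageGb) {}

public class DeploymentSketch {
    // Pick the first host that satisfies the declared requirements, or null if none.
    static Host place(AppDescription app, List<Host> pool) {
        for (Host h : pool) {
            if (h.freeCores() >= app.cpuCores()
                    && h.freeMemoryGb() >= app.memoryGb()
                    && h.freeStorageGb() >= app.storageGb()) {
                return h;
            }
        }
        return null; // no capacity: a utility-computing layer might provision more here
    }

    public static void main(String[] args) {
        AppDescription app = new AppDescription("order-service", 4, 8, 100);
        List<Host> pool = List.of(
                new Host("hostA", 2, 16, 500),
                new Host("hostB", 8, 32, 500));
        Host chosen = place(app, pool);
        System.out.println(chosen == null ? "no capacity" : "deploy to " + chosen.id());
    }
}
```

The design point is that the placement decision is driven by a declaration the developer wrote up front, not by an administrator eyeballing capacity after the application is built.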

Developing end-to-end SDM support within the Windows family will take another 5 to 10 years, but Microsoft will deliver incremental changes as new versions of the server and operating system are hatched.

Other development platforms are working toward similar functionality. Sun’s experimental project, JFluid, is focused on profiling Java applications by “instrumenting” the code so that application behavior can be changed dynamically. IBM, among others, has focused research on autonomic computing, which the company defines as “self-managed computing systems with a minimum of human interference.” The resulting products are variously described as self-configuring, self-healing, self-optimizing, and self-protecting.
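
IBM often describes autonomic behavior as a loop that monitors a metric, analyzes it against a goal, plans an adjustment, and executes it. The sketch below is a deliberately simplified, hypothetical version of such a loop (the queue-depth metric and worker-count knob are invented), not anything IBM or Sun ships.

```java
import java.util.concurrent.ThreadLocalRandom;

// A toy autonomic control loop: observe a metric, compare it with a target,
// and adjust a tuning knob, i.e. "self-optimizing" in miniature.
public class AutonomicLoopSketch {
    private int workers = 4;                 // the knob the loop is allowed to turn

    int sampleQueueDepth() {                 // stand-in for real monitoring data
        return ThreadLocalRandom.current().nextInt(0, 200);
    }

    void tick() {
        int depth = sampleQueueDepth();      // monitor
        if (depth > 100 && workers < 32) {   // analyze + plan
            workers++;                       // execute: scale up
        } else if (depth < 20 && workers > 1) {
            workers--;                       // execute: scale down
        }
        System.out.printf("queue=%d workers=%d%n", depth, workers);
    }

    public static void main(String[] args) throws InterruptedException {
        AutonomicLoopSketch loop = new AutonomicLoopSketch();
        for (int i = 0; i < 10; i++) {       // a real system would run indefinitely
            loop.tick();
            Thread.sleep(500);
        }
    }
}
```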

Like on-demand or autonomic computing, instrumenting the enterprise is a way to bring more automation and flexibility into the IT equation. Standards bodies and vendors will come up with standards that make IT instrumentation easier to implement and orchestrate. However, without making instrumenting at multiple levels and across the extended enterprise integral to the IT architecture, you’ll be grounded while more intrepid competitors are still flying high.

You can write to me at dan.farber@cnet.com. If you're looking for my commentaries on other IT topics, check the archives.
