As organizations begin to deploy Web services, it is becoming increasingly apparent that they must be managed in an organized way. However, the principles of managing Web services (and the technologies to support those principles) are in the early stages of maturity.
META Trend: By 2004, Web services technology and service-oriented architectures will promote construction of reusable components. Through 2005/06, the complexity of componentized, distributed applications will spur creation and adoption of pattern-driven development frameworks. By 2007, model-driven approaches will represent a significant minority of development efforts.
Understanding what it means to manage Web services is a challenging task. Although there is some agreement about basic manageability characteristics, the use cases for Web services are relatively undefined, leading to uncertainty about how to manage them and what goals are being achieved. Current management technologies are all very new, and even those that are extensions of existing performance monitoring and management suites are still exploring the boundaries of manageability.
Managing Web services will remain an amorphous task through 2003. By 2004, initial products targeted at existing use cases will begin to coalesce around standard functionality levels. This is likely to produce two distinct classes of management tools: some focused on monitoring and alerting, with limited control functions, and others focused on control, with additional monitoring capabilities. During 2005, this split will be resolved, as leading vendors in the performance management space acquire both monitoring- and control-type technology companies to shore up their offerings. This consolidation will continue through 2007, when the market will be dominated by major management players (HP, CA, Mercury, Precise, BMC Software, et al.) and infrastructure players (IBM and Microsoft).
Web services manageability requirements are quickly becoming a major challenge for leading organizations. As Web services begin to be used in a wide variety of circumstances, the potential impacts of an unmanaged situation are becoming clear. In many ways, the current situation with Web services is analogous to the situation with Web sites in 1997. During that time, virtually every type of technology became capable of deploying an HTML interface to a Web site. The result was a profusion of unmanaged sites, with uneven quality control and service levels, as well as exorbitant costs. Most organizations rapidly consolidated the infrastructure and control of those Web sites and provisioned them with management tools, as well as consistent platforms and architectures, to facilitate the management of those sites. A similar situation is likely to occur with Web services in a large number of organizations. Where services are allowed to proliferate without regard to manageability, the consequence will be a set of unreliable services. These services will then need to be consolidated onto consistent delivery platforms (running on consistent infrastructure) to ensure that they can deliver appropriate service levels. The challenge for Web services management technologies is separating the various aspects of service management from those of the creation of Web services.
Several functions of a Web services management platform are clear and relatively undisputed. The platform should be able to define and monitor basic quality of service for the various services it controls. These quality-of-service metrics would be similar to those applicable for other types of network transactions (throughput, response times, availability, error states). Although the exact parameters are not entirely clear, it seems reasonable to assume that this job is a requirement for a management tool. These metrics will likely need to be published by the services themselves, through a standard instrumentation application programming interface. The lack of clarity comes from the application of these traditional measures to interfaces with Web services characteristics. For example, what exactly is meant by availability? Is it availability of the SOAP listener for that service to accept the request? Or is availability defined by the service response? If the former, the measure may overstate availability (because the listener could be available while the actors that perform the logical service functions are not); if the latter, the measure may be so coarse-grained as to be meaningless. (If the measure says that the service returned a valid response to its last caller, what does this mean for a service that is expected to run over the course of several days? Its last caller that got a response may have called it two weeks ago!)
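The listener-versus-response ambiguity can be made concrete with a small sketch. The probe record and the two availability definitions below are illustrative assumptions, not any vendor's instrumentation API; the point is only that the same probe history yields very different numbers depending on which definition a tool adopts.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Probe:
    listener_up: bool      # the SOAP listener accepted the request
    valid_response: bool   # a valid service response actually came back

def listener_availability(probes: List[Probe]) -> float:
    """Availability defined as: the endpoint accepted the request."""
    return sum(p.listener_up for p in probes) / len(probes)

def response_availability(probes: List[Probe]) -> float:
    """Availability defined as: a valid service response was returned."""
    return sum(p.valid_response for p in probes) / len(probes)

# Hypothetical probe history: the listener is up throughout, but the
# actors behind it fail on half the calls.
history = [Probe(True, True), Probe(True, False),
           Probe(True, True), Probe(True, False)]

print(listener_availability(history))  # 1.0 -- overstates availability
print(response_availability(history))  # 0.5
```

A tool that reports only the first figure would show a perfectly available service whose consumers are failing half the time.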
It is likely that management tools will develop flexible rule definitions to apply different measurement styles to these requests, though this type of rules-based monitoring has not fared well in current markets. It is also clear that to track most of these statistics, the management platform must monitor the SOAP traffic that flows in and out of the managed services. This should provide fertile ground for tracking interesting aggregate performance measures (average throughput, etc.) and allow them to be recalculated in real time, instead of having to use historical measures. The architecture necessary to do this exists in many of the released and early management tools (e.g., Amberpoint, Talking Blocks, Flamenco, Actional).
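The real-time recalculation described above can be sketched as a sliding-window monitor sitting on the SOAP traffic path. The class and its metrics are assumptions for illustration (no specific product works this way); old samples fall out of the window, so the aggregates reflect current rather than historical behavior.

```python
from collections import deque

class SoapTrafficMonitor:
    """Sketch of an intermediary that observes each SOAP request/response
    pair and keeps rolling statistics, recalculated as traffic arrives
    rather than computed from historical batch reports."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, latency_seconds)

    def record(self, timestamp: float, latency: float) -> None:
        """Called once per observed request/response pair."""
        self.samples.append((timestamp, latency))
        self._expire(timestamp)

    def _expire(self, now: float) -> None:
        # Drop samples that have aged out of the window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def avg_latency(self) -> float:
        if not self.samples:
            return 0.0
        return sum(lat for _, lat in self.samples) / len(self.samples)

    def throughput(self) -> float:
        """Requests per second over the current window."""
        return len(self.samples) / self.window
```

Because the monitor sees every message, the same interception point can feed alerting rules directly, which is what makes the intermediary position so attractive to vendors.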
This architecture begins to raise a significant number of questions. If the management platform is an intermediary that needs to inspect all SOAP traffic, then why not have it perform other control-related functions on that traffic? This is where the definition of the functions of the management platform becomes very unclear. Is security part of the management platform remit? What about composition of complex services? Load balancing? Life-cycle control? Although each of these functions is needed in the Web services environment, in most other contexts these control activities are not part of the management suite. However, in this case, it seems likely that the definition of management will be stretched to include these functions. Ultimately, a new taxonomy of tools and interfaces will be developed, focused on the various roles for whom the information is produced. Many current Web services management tools try to combine these views and include development task-oriented capabilities (composition), but confusion about the user of these technologies will make this approach untenable, and more tool segmentation will result.
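The "why not do more at the intermediary" argument amounts to a handler pipeline: once every message passes through one point, security checks, metering, and other control functions become pluggable stages. The following is a minimal sketch under assumed names (a SOAP message reduced to a dict, and hypothetical handlers); it is not modeled on any particular product.

```python
from typing import Callable, Dict, List

# A handler inspects a message (here simplified to a dict) and either
# returns it, possibly modified, or rejects it by raising.
Handler = Callable[[dict], dict]

def require_auth_token(msg: dict) -> dict:
    """Illustrative security stage: reject unauthenticated requests."""
    if "auth_token" not in msg.get("headers", {}):
        raise PermissionError("unauthenticated SOAP request")
    return msg

def count_requests(counter: Dict[str, int]) -> Handler:
    """Illustrative monitoring stage: per-service request counts."""
    def handler(msg: dict) -> dict:
        counter[msg["service"]] = counter.get(msg["service"], 0) + 1
        return msg
    return handler

def pipeline(handlers: List[Handler]) -> Handler:
    """Compose stages into a single intermediary entry point."""
    def run(msg: dict) -> dict:
        for h in handlers:
            msg = h(msg)
        return msg
    return run
```

The design question the article raises is exactly whether stages like `require_auth_token` belong in a "management" product at all, or in a separate security tier that shares the same interception point.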
Differences also exist in the primary objectives of management and control when comparing internal and external use of Web services. For internally used Web services, the focus should be on monitoring and taking corrective action to account for service-level failures. These failures should be addressed both at the technology level (e.g., a service is taking too long to respond) and at the process level (an instance of a process is taking too long). The challenge is that a different set of attributes needs to be monitored in each case, and the corrective actions would be wildly different. For example, if a given service instance was not responding within its defined service level, then one would want to check the heartbeat of the producing application and its components. If they were all available, then one might want to see if the utilization had surpassed a throughput threshold and start a new instance of the application to handle the "spike in load." In the business-relevant case, the corrective action is likely to be to initiate a new process that involves calling the customer to let him or her know there has been a delay, and investigating the causes of that delay. In one case, the actions are traditional IT control-type actions; in the other, they would involve complex workflow and application initiatives. When going outside the company, the set of critical functions is often somewhat different.
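The heartbeat/threshold/workflow decision described above can be written as a simple rule, which also shows why the two monitoring styles need different inputs. All parameter and action names here are illustrative assumptions, not a product API.

```python
def choose_corrective_action(heartbeat_ok: bool,
                             throughput: float,
                             threshold: float,
                             sla_breached: bool) -> str:
    """Illustrative decision rule combining technology-level and
    business-level responses to a service-level failure."""
    if not heartbeat_ok:
        # Technology level: the producing application is down.
        return "restart-producing-application"
    if sla_breached and throughput > threshold:
        # Technology level: components are up but overloaded.
        return "start-new-instance"
    if sla_breached:
        # Business level: no infrastructure cause found, so escalate
        # to a human workflow (notify the customer, investigate).
        return "initiate-customer-notification-workflow"
    return "no-action"
```

Note that the first two branches consume infrastructure attributes (heartbeat, throughput) while the third requires process context that a bottom-up monitoring tool typically does not have.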
Vendors appear to be addressing these problems from two angles: a top-down and a bottom-up approach. The bottom-up approach is the one most familiar to operations staff and involves the augmentation of existing system monitoring and control tools with Web services capabilities. This approach leads to integrated tools that can easily handle situations like the one just described, but lack any context relating their metrics to the business activities within the running applications. The top-down approach assumes that Web services have been used to knit together end-to-end business processes and that those process definitions are available to the tool. These tools attempt to tie everything to associated business-relevant metrics and to answer queries expressed in those terms.
Finally, it is likely that the different interaction patterns for Web services will also have different management requirements. Data services will in all likelihood be created by and executed from within databases. This means that these database-centered services must somehow participate in a management infrastructure that also encompasses services hand-coded into applications and running in the application environments, as well as those generated by enterprise application integration, packaged applications, and other services.
Business Impact: Managing Web services will be required for business integration investments to return value.
Bottom Line: Users deploying Web services must develop a management strategy and provision a toolset, but any current approach should be regarded as provisional until the correct allocation of management functions to components becomes clear.
META Group originally published this article on 14 February 2003.