The Hidden Challenges in Integration

During the past few years, many organizations have worked to improve their integration technologies and techniques. However, in some cases, these efforts have not created the flexibility that organizations desire.
Written by Daniel Sholler, Contributor

During the past few years, many organizations have worked to improve their integration technologies and techniques. However, in some cases, these efforts have not created the flexibility that organizations desire. The single largest factor causing these efforts to fall short is knowledge transfer. Focusing on resolving issues related to knowledge transfer will lead to greater productivity and ease of integration.

META Trend: By 2004, Web services technology and service-oriented architectures will promote construction of reusable components. Through 2005/06, the complexity of componentized, distributed applications will spur creation and adoption of pattern-driven development frameworks. By 2007, model-driven approaches will represent a significant minority of development efforts.

Organizations have struggled for years to improve methods of integration. Many organizations have adopted various EAI technologies (messaging, integration servers, and even process automation) in an attempt to resolve such problems. Although these technologies can simplify and improve much about the way integration is performed within the organization, significant design, organizational, and management issues contribute to the inflexibility and high costs of integration.

By 2004, more than 80% of Global 2000 organizations will be using EAI technologies for integration in their companies. Through 2005/06, these organizations will struggle to achieve their goals for integration because of the challenges inherent in changing design and organizational practice. By 2007, service-oriented concepts will be widely adopted by corporations, and organizational structures and practices will have been realigned around these principles.

The biggest problem with integration often goes unrecognized. Few organizations using advanced integration techniques have clearly identified this problem, and almost none have made it a priority, yet it is the single largest barrier to change in most integration situations: knowledge transfer. Most legacy integration techniques assume that the creator of a new integration is capable of learning and retaining a tremendous amount of detailed information about the other applications, systems, and business processes over time. During the initial implementation, this requirement makes constructing the integration a research-intensive affair. Over time, it causes chronic errors and delays, because the organization learned the detail only superficially, in order to execute the integration, and soon forgets it.

Examples of this problem abound. A typical corporate finance system (i.e., all of accounting [GL/AP/AR] and finance [treasury, expenses, consolidation, etc.]) will require hundreds of separate connections to other applications in a typical G2000-size organization. A team performing the initial development of such integration may be able to tackle each interface (adapter) one at a time, investigate its behavior, and design a specific interaction based on the technical and functional needs of the other applications, but this is slow and costly. Once these interfaces have been established, the team moves on to other tasks, and the detailed knowledge of how the adapters work is forgotten. Relearning this knowledge every time a change is made is one of the major causes of inflexibility in integration change. Even when these interfaces are based on prebuilt adapters from an EAI vendor’s library, are implemented on top of the vendor’s messaging, and use the vendor’s graphical mapping tools and visual process modelers, the work can still be difficult. Solving the problem requires a different design and organizational approach.

To solve this problem, organizations need to adopt an architectural principle that addresses the issue and use that principle to drive their decision making. The principle is that information about functions and their technical implementation should be localized with their owners. This sounds simple, but it demands a fundamentally new role when it comes to integration: the owner of the interface.

In most current models, it is the consumer of information that owns the interface. Typically, if a team has a project that needs to access an existing application’s functions and data, that team owns the budget, so it defines the interface into the application providing the information, and it must implement and maintain that interface over time. This approach leads to a multiplicity of interfaces based on different technologies and design patterns, and it spreads the knowledge of how best to develop interfaces to a particular application among the consumers of those interfaces. Since it is unknown who the next consumer will be, in all likelihood the knowledge will have to be relearned by the next project team.

Instead, the organization must challenge the groups that support these applications to treat them in a service-oriented manner. Instead of having the consumer of their functions or data drive the development, there must be a negotiated contract between the producers and the consumers. This contract, which is represented by a publicly available interface, defines how the consumers and producers interact. The contracted interface is independent of both the provider’s and the consumer’s technologies, and has its own life cycle and governance. One can usefully think of this negotiated interface as being the interface into a new virtual or composite application that is the sum of the applications being integrated. Obviously, no single subsystem should “own” the interface of the composite system that contains the subsystem.
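As a rough sketch of this idea (all names here are hypothetical, chosen for illustration rather than taken from the article), such a negotiated contract can be modeled as an abstract interface that carries only business semantics, with any number of producer implementations behind it:

```python
from abc import ABC, abstractmethod


class CustomerService(ABC):
    """The contracted interface, expressed only in business semantics.

    No provider APIs, invocation sequences, or technology details
    appear here, so the contract can be versioned and governed
    independently of both producer and consumer.
    """

    @abstractmethod
    def create_customer(self, name: str) -> str:
        """Create a customer and return its contract-level identifier."""

    @abstractmethod
    def get_customer(self, customer_id: str) -> str:
        """Return the customer name for a contract-level identifier."""


class InMemoryCustomerService(CustomerService):
    """One possible producer. Consumers depend only on the contract,
    so this implementation could be swapped for any other provider
    without consumers relearning its internals."""

    def __init__(self) -> None:
        self._customers: dict[str, str] = {}

    def create_customer(self, name: str) -> str:
        customer_id = f"CUST-{len(self._customers) + 1}"
        self._customers[customer_id] = name
        return customer_id

    def get_customer(self, customer_id: str) -> str:
        return self._customers[customer_id]
```

Because the contract is a separate artifact with its own life cycle, either side can evolve its internals without renegotiating the interface.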

Although in some cases the interface will be similar to a private API set or other construct in the major provider subsystem (e.g., SAP), logically it is not under the control of that provider. This interface should be independent of the technical details of the provider application, so no knowledge of APIs, invocation sequences, or technology details should be necessary. Likewise, the business semantics (i.e., business-relevant objects and functions such as create new customer, update claim status, move inventory, etc.) of the service interface should be independent of the details of the provider application. This service-oriented approach has many other benefits besides enabling localized knowledge, but it will enable the groups that support various functions to manage the details of those functions and expose only that which is functionally useful to the outside world.

Accordingly, the provider applications should be responsible for mapping their private domain constructs to the contracted service interface, and the consumer should do likewise. This approach has led to a number of direct cost savings, particularly when the provider is a legacy mainframe application. In many instances, these applications have accumulated a large number of middleware components to provide access. By consolidating the techniques used to map to the public interface, some of these middleware components become redundant, eliminating both their license costs and the skills required to maintain them.
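A minimal sketch of the provider-side mapping, with hypothetical names: the legacy gateway keeps its private record format, and an adapter owned by the provider team translates the contracted business call into it, so consumers never need to learn that format:

```python
class CustomerService:
    """Contracted interface (business semantics only)."""

    def create_customer(self, name: str) -> str:
        raise NotImplementedError


class LegacyMainframeGateway:
    """Stand-in for a legacy provider's private API: cryptic call
    names and a positional record format no consumer should learn."""

    def __init__(self) -> None:
        self._records: dict[str, str] = {}

    def cust_add(self, record: str) -> str:
        # e.g. record = "NAME=Acme" in the provider's private layout
        key = f"C{len(self._records) + 1:06d}"
        self._records[key] = record
        return key


class MainframeCustomerAdapter(CustomerService):
    """Provider-owned adapter: maps private domain constructs to the
    contracted interface, keeping knowledge of the record format
    localized with the team that supports the mainframe."""

    def __init__(self, gateway: LegacyMainframeGateway) -> None:
        self._gateway = gateway

    def create_customer(self, name: str) -> str:
        record = f"NAME={name}"  # translation into the private layout
        return self._gateway.cust_add(record)
```

Consolidating access behind one such adapter is what allows the redundant middleware components described above to be retired.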

Clearly, taking this approach will be an incremental, bottom-up affair. No organization can afford to fund a project to create services for the sake of having services. This means a process must be put in place to ensure that the interfaces are, in fact, hiding the technical and business detail and keeping the knowledge where it belongs. Such interfaces will evolve over time as new use cases are found for them. This is the service governance process.

Implicit in this approach are several other changes to the development of integration. One is to use a shared-services organization to implement the integration technology and manage the shared interface “contracts.” This organization is responsible for the definitions (semantics and syntax) of the service interfaces or, at the very least, for the governance process (previously mentioned) that negotiates such definitions.

The other major change is the manner in which integration is funded. A mechanism must be created whereby the organizations are funded and incented to create and consume these services. Such funding mechanisms can vary widely but are mostly based on the idea that all projects involve integration and, therefore, every project’s integration components will be judged on how well they leverage existing knowledge bases. At the planning stage of a project, it is a requirement to include the groups that will be called on to create or update service definitions and to ensure that a portion of the project budget is allocated for their time. Siloed management structures can make this difficult; this model is usually achieved much more easily when the project teams are multidisciplinary.

Business Impact: Service-oriented architectures represent a change in approach that, over time, will improve the productivity of the IT organization.

Bottom Line: Organizations must focus on partitioning work based on domain expertise and eliminate barriers to sharing and constructing reusable assets.

META Group originally published this article on 28 August 2003.
