Hop on the enterprise service bus

Written by David Berlind
It's no secret that one imperative for achieving maximum enterprise productivity is to remove the people who serve as the glue between the various systems involved in your business processes and replace them with technology -- integration technology that works not just between the systems behind your firewall, but also with the systems outside the firewall such as those of your partners, suppliers, and customers.

For years, the integration technology has existed, but it has largely come in the form of hard-wiring and at an intolerable cost of ownership. A tiny upgrade to one end-point could result in a cavalcade of projects just to bring all of the other related systems in line. Eventually, half-acknowledging one of the biggest pain points of their customers and half-hoping to build the bridge over which they could take each other's customers, IBM and Microsoft aggressively pursued a way to integrate dissimilar middleware technologies and applications, such as IBM's Java-based WebSphere and Microsoft's .NET.

The result has been a blitzkrieg of abstracting APIs -- collectively known as Web services -- that can replace most, if not all, of that hard-wiring with an XML-based lingua franca that all systems can understand. Ultimately, if all systems speak the same language, one of an enterprise's biggest cost centers can be driven way down.

As the APIs evolved, grass-roots-organised Web services plug-fests became the proving ground for showing how easily any software component could talk to any other. Development tools such as Microsoft's Visual Studio .NET have evolved not only to produce service-oriented software, but to take existing software and, with relatively few steps, wrap it in an XML-ready, service-oriented layer. Inside, it was still the same old software, but on the outside, it looked like a Web service.
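
To make the wrapping idea concrete, here's a minimal sketch on the Java side of the fence, using the JAX-WS API that was later bundled with Java SE (through Java 10). The class names, the SKU lookup, and the URL are all hypothetical; the point is that the legacy logic goes untouched while a thin, annotation-driven layer exposes it as an XML-speaking Web service.

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Hypothetical legacy class: plain Java with no knowledge of XML or SOAP.
    class LegacyInventory {
        int unitsInStock(String sku) {
            return "WIDGET-7".equals(sku) ? 42 : 0; // stand-in for a real lookup
        }
    }

    // The wrapper: a thin, XML-facing layer over the unchanged legacy logic.
    @WebService
    public class InventoryService {
        private final LegacyInventory legacy = new LegacyInventory();

        public int unitsInStock(String sku) {
            return legacy.unitsInStock(sku); // same old software inside
        }

        public static void main(String[] args) {
            // Publishes a SOAP endpoint (and generated WSDL) at this address.
            Endpoint.publish("http://localhost:8080/inventory", new InventoryService());
        }
    }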

For those ready to embrace the idea, another issue remains. Just because you can get discrete software components and systems talking directly to each other doesn't mean you should. For transactional systems engaging in millions of transactions a day -- transactions of different types and priorities that need to integrate with different systems -- it's not as simple as just spitting those transactions out onto a TCP/IP network as fast as you can and crossing your fingers. Nor is it as simple as turning the receiving system's ear to the network and hanging an "Open for XML requests" sign in the window. You can do both but, depending on your situation, it could be an invitation to mayhem. Imagine thousands of unruly people storming the turnstiles at a soccer match. The aftermath isn't pretty. TCP/IP as a network simply doesn't have enough built-in facilities for managing the integrity of business-critical transactions that need to take place according to priority in a reliable, secure, and often choreographed fashion.
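
For illustration, this is roughly what "just spitting transactions out onto a TCP/IP network" amounts to -- a minimal Java sketch with a hypothetical host, port, and message format. The comments spell out what TCP does and doesn't promise.

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class NaiveSend {
        public static void main(String[] args) throws Exception {
            // Hypothetical receiving system; neither the host nor the port is real.
            try (Socket socket = new Socket("orders.example.com", 9000)) {
                OutputStream out = socket.getOutputStream();
                out.write("<order id='1001'>...</order>".getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
            // TCP confirms only that the bytes reached the peer's socket buffer.
            // If the receiver crashes before processing them, or is swamped by a
            // thousand simultaneous senders, the transaction is simply lost:
            // no priority, no retry, no acknowledgement that the business
            // operation itself ever happened.
        }
    }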

As it turns out, taming such mayhem has nothing to do with XML or Web services. People have been integrating systems for over a decade, usually via proprietary methods, and this is an issue integrators have had to deal with regardless of the data or networking protocols involved. To bring order to chaos, the answer has invariably been the insertion of message queues as a layer of abstraction between integrated systems.

Although the metaphor is a bit of an oversimplification, message queues such as IBM's WebSphere MQ do for integration what orderly queues do for getting spectators safely in and out of a soccer match. They take on concerns such as guaranteed delivery, which protocols like TCP and UDP can't provide for transactions taking place at a fairly high layer in the protocol stack.

In the spirit of cost reduction, message queues also have another benefit. In typical intermediary fashion, message queues allow the host systems (to which the queues are adjacent) to focus on spitting out transactions while the queue is the proxy that worries about the business logic of what to do with them, such as prioritisation and routing. From an ongoing maintenance point of view, such separation of duties reduces the complexity of having the transaction logic intermingled with business logic. The generally accepted rule of TCO is "less complexity, less cost".
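
Here's a minimal sketch of both ideas -- guaranteed delivery and prioritisation -- using the standard JMS API that products like WebSphere MQ support. The broker URL, queue name, and message body are hypothetical, and Apache ActiveMQ's connection factory stands in for whatever a real deployment would use. The host system simply hands the transaction off; delivery and priority are the queue's business, not the application's.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class EnqueueTransaction {
        public static void main(String[] args) throws Exception {
            // Stand-in broker; WebSphere MQ supplies its own ConnectionFactory.
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("ORDERS"); // hypothetical queue name
                MessageProducer producer = session.createProducer(queue);
                TextMessage msg = session.createTextMessage("<order id='1001'>...</order>");

                // PERSISTENT means the message survives a broker restart -- the
                // guaranteed delivery raw TCP lacks -- and the priority argument
                // (0-9) lets urgent traffic jump the line. The sending system
                // neither knows nor cares what happens downstream.
                producer.send(msg, DeliveryMode.PERSISTENT, 8, 0L);
            } finally {
                connection.close();
            }
        }
    }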

But with nothing but message queues (and the systems they abstract), the potential for mayhem and complexity still exists. At the very least, directly connecting message queues to one another still represents a form of hard-wiring that could benefit from some additional abstraction. For example, connecting message queues directly to other systems or other queues means that a queue might have to be reprogrammed every time systems get moved, changed, or replaced. If there are only two nodes in your integration matrix, as there might have been in the days when integration was so difficult that you wouldn't fathom manually wiring more than a couple of systems, this is no big deal.

As integration became more commonplace and more nodes were thrown into the mix, however, it became clear that the message queue didn't need the additional responsibility of connectivity. It's one thing to integrate, and it's another to make sure the entire system scales well. Distracting the host systems or the message queues with tasks that can be taken on by something else runs the risk of gumming up the works.

When a single transaction in your point-of-sale system results in interactions with your inventory system (is the product in stock?), your credit card processing service (is the card valid?), your ledger, your CRM system (do we have updated customer information?), and your shipper (multiple drop ship?), the last thing you want your message queue to have to worry about is where all those things are -- especially since, for whatever reasons, you might have other point-of-sale systems for other lines of business interacting with the same services. Rather than maintain all that connectivity data in the queues, a better way is to maintain the routing information in a central location that can serve as yet another layer of abstraction.

Peter Linkin, senior director of product marketing for BEA's WebLogic, put it to me this way: "What was needed was a system that just told us what the message was, what the contents were, where it should go, and what the quality of service for it should be. Then, that was handed off to the next level, which is something like a central post office. The postmaster says, 'Send me all your messages from all these outpoints, and I will intermediate. I'll make sure they get sent individually, and reliably, to all the end points.' It's a message broker that's driven by business process. The end points don't have to know about each other, so there's ignorance at each end and the logic of the business process is in the plumbing."

If two point-of-sale systems need to reach your CRM system as part of their business process, the message gets packaged and readied for sending to the CRM system, but no particular destination is specified. When that message pops off the top of the queue and reaches a broker -- the equivalent of that central post office -- the postmaster says, "Oh, I know where the CRM system is" and then ensures that the message is reliably delivered according to the prescribed quality of service.

The beauty of this broker-like design is that it operates very much like the Internet's DNS. If, for example, you decide to switch your CRM system from Siebel to Salesforce.com, you don't have to reprogram all your queues. You just reprogram the central directory to say "the CRM system is over there."
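
A toy sketch of that directory idea in plain Java might look like the following. The logical name and the endpoint URLs are hypothetical; the point is that queues address messages to "CRM" and only one table anywhere knows, or ever needs to change, what that currently means.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // A toy "central post office" directory: senders address messages to a
    // logical name, and only this table knows the physical endpoint.
    public class ServiceDirectory {
        private final Map<String, String> routes = new ConcurrentHashMap<>();

        public void register(String logicalName, String endpoint) {
            routes.put(logicalName, endpoint);
        }

        public String resolve(String logicalName) {
            String endpoint = routes.get(logicalName);
            if (endpoint == null) {
                throw new IllegalStateException("No route for " + logicalName);
            }
            return endpoint;
        }

        public static void main(String[] args) {
            ServiceDirectory directory = new ServiceDirectory();
            directory.register("CRM", "http://siebel.internal.example.com/crm");

            // Queues and point-of-sale systems only ever say "CRM"...
            System.out.println(directory.resolve("CRM"));

            // ...so swapping Siebel for Salesforce.com is one change here, not a
            // reprogramming of every queue that sends customer data.
            directory.register("CRM", "https://api.salesforce.example.com/crm");
            System.out.println(directory.resolve("CRM"));
        }
    }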

Although the overall architecture has evolved to have more primary parts (the transactional system, the message queue, the central post office), the distribution of responsibility results in less complexity, as well as less cost from a maintenance perspective. It also allows components of the workflow logic to be moved to where they can be most efficient. For example, if some customer data is sent to one CRM system and other customer data to another, the brokering logic that makes that decision can be put in the central router rather than in the message queues.

In transactional systems, where one infinitesimal change can mean a high number of systems have to be touched, not only is the number of touch points reduced (which drives down maintenance costs), but the system can also perform better, since each of the discrete parts gets to home in on its core competencies without the distraction of problems that other parts are better suited to work on.

Another benefit of this infrastructure framework is that it introduces a level of fault tolerance into the system. If the primary CRM system has failed or isn't responding, the central post office will not only detect the fault, but also know where to re-route the traffic. In a very small way, this reliability aspect is one reason that service-oriented architectures (SOAs) are being associated with concepts like utility and on-demand computing.
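
Continuing the toy sketch above, the re-routing logic might look something like this. Everything here is hypothetical -- the endpoints, the simulated outage, the delivery mechanism -- but it shows the shape of the idea: the sender stays ignorant while the "postmaster" tries the primary and quietly falls back to a standby.

    import java.util.List;

    // Sketch of the fault-tolerance idea: try the primary endpoint and, on
    // failure, re-route to a standby. Endpoints and the outage are simulated.
    public class FailoverRouter {
        private final List<String> crmEndpoints = List.of(
                "http://crm-primary.example.com",
                "http://crm-standby.example.com");

        public void deliver(String message) {
            for (String endpoint : crmEndpoints) {
                try {
                    send(endpoint, message); // placeholder for a real HTTP or queue send
                    return;                  // delivered; stop trying
                } catch (Exception faultDetected) {
                    // Fault detected; fall through and try the next endpoint.
                }
            }
            throw new IllegalStateException("All CRM endpoints are down");
        }

        private void send(String endpoint, String message) throws Exception {
            if (endpoint.contains("primary")) {
                throw new Exception("simulated outage"); // the primary isn't responding
            }
            System.out.println("delivered to " + endpoint + ": " + message);
        }

        public static void main(String[] args) {
            new FailoverRouter().deliver("<customerUpdate>...</customerUpdate>");
        }
    }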

Enterprise Service Bus
In some circles, the architecture being described here is called an Enterprise Service Bus (ESB). Scott Cosby, program director for IBM's WebSphere, cautions that there is no single definition of an ESB. "Each customer is likely to create their ESB in a unique way," Cosby told me. "The most important thing to remember when considering the definition of an ESB is that flexibility is key. There's no fixed way to build one. As long as you have a connectivity layer that optimises information distribution between service requestors and service providers, one that can respond to event, message, or service-oriented contexts, then it's safe to say what you have fits the description."

As a reminder, though, Cosby says there's really nothing new about this arrangement. It's a well-known approach -- one that application server vendors like IBM and BEA have included in their offerings for quite some time. Where this architecture really starts to take on the characteristics of a bus -- much like the bus inside a computer, into which adaptor cards of various types can be plugged because they all agree on an interconnect technology like PCI -- is where standards get introduced. Reflecting on the legacy ESBs in place, Cosby pointed out that 80 percent of the business transactions taking place today are done over EDI. For this reason, Cosby warns against ESB definitions that fail to acknowledge what's already in place. That said, Cosby agrees that XML and Web services are great ESB enablers.

As most people know by now, protocols like XML and SOAP (the building blocks of Web services) have significantly greased the wheels of integration by giving systems that must interoperate with each other something they can agree on -- a mutual understanding of how to read the data, instructions, and service invocations that each system is putting into its message queues. By doing this, XML-based Web services removed yet another element of hard-wiring from the old-school ways of integration. Before, that mutual understanding had to be programmed somewhere into the logic, and the methods were largely proprietary.
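
For a sense of what that mutual understanding looks like on the wire, here's a minimal sketch that builds a SOAP envelope with Java's SAAJ API (bundled with Java SE through version 10). The element names, namespace, and field values are hypothetical; what matters is that the output is plain XML any compliant system can parse.

    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPBody;
    import javax.xml.soap.SOAPElement;
    import javax.xml.soap.SOAPMessage;

    public class SoapExample {
        public static void main(String[] args) throws Exception {
            // Build the kind of XML envelope two dissimilar systems can both read.
            SOAPMessage message = MessageFactory.newInstance().createMessage();
            SOAPBody body = message.getSOAPBody();

            // Element names, namespace, and values are hypothetical.
            SOAPElement customer = body.addChildElement(
                    "customerUpdate", "crm", "http://example.com/crm");
            customer.addChildElement("id", "crm").addTextNode("C-1001");
            customer.addChildElement("name", "crm").addTextNode("Acme Ltd");

            message.writeTo(System.out); // XML that any compliant system can parse
        }
    }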

Once Web services came on the scene, everything changed. Not only is there now a standard language over which systems can interoperate, but the desire to get systems interoperating is on the rise because the obstacles are diminishing. But as more systems get tossed into the integration fray, the load on the overall architecture increases and, if any one part ends up overloaded, it might not be long before the cracks start to show.

Keep in mind the downside of adding more gears to the mix. If one of those gears falls out of synch, the rest could end up waiting. The system is not unlike a car's engine. An engine's peak performance depends on precise timing between the process that opens the valves to let the air-fuel mixture into the engine's cylinders, the compression of that mixture by a piston that is hopefully on its way towards the top of the cylinder, and the generation of the spark that ignites the compressed mixture (the resulting combustion pushes the piston back down through the cylinder so as to turn the crankshaft and, ultimately, the wheels of the car). If any one component of that process starts to experience a problem, or gets ahead of the others, the engine sputters when you hit the gas pedal.

Between the need for good performance of real-time processes, the requirement of well-timed components, and the increased interest in integration, there are still plenty of opportunities for the interoperation to break down. While Web services give systems a common language with which to hold their discussions, there still isn't a lot of agreement over the structure of the data being passed back and forth (known as the XML schema) or the vocabulary. The SAP ERP system may be Web services compliant, but its schema for customer data may be very different from that of the Siebel CRM system to which the data might be sent. For example, what fields are being tracked for each customer, what are the exact names of those fields, and how long are they?

When two or more systems have different native treatments for what is ultimately the same data, those differences have to be resolved somewhere before they can interoperate. But where should this translation take place? In the application or transaction logic? In the message queue? In the central post office?

The answer, according to IBM's Cosby, is basically any of the above. Given the state of the legacy systems where this transformation takes place, the ideal solution will be highly situational. At the same time, if you notice that saddling any one of those components with the responsibility of transformation is causing the engine to sputter, you still have options that conform to the notion of an ESB.

For example, why not have a single, common data format to which all interoperating nodes must resolve, and then give each system access to that format through a transformation adaptor? This approach -- where software adaptors resolve the native data coming from each endpoint into a common format, in which it can move about the "bus" through brokers until it finally comes into contact with another adaptor on another system that takes care of the "outbound translation" -- is clearly in the spirit of an ESB, suggests BEA's Linkin.
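
A minimal sketch of such an adaptor follows, picking up the earlier ERP-versus-CRM example. Both record layouts are hypothetical stand-ins (the combined, length-limited name field is an assumption for illustration); the point is that the field-name and field-length mismatches get resolved at the adaptor, once, instead of inside every system that wants to talk.

    // Sketch of a transformation adaptor: each endpoint's native shape is
    // resolved to one canonical format before the message rides the bus.
    public class CustomerAdaptor {

        // Hypothetical ERP-native shape: one combined, length-limited name field.
        static class ErpCustomer {
            String kunnr; // customer number
            String name1; // combined name, assumed max 40 characters
        }

        // Hypothetical canonical shape agreed for the bus.
        static class CanonicalCustomer {
            String id;
            String firstName;
            String lastName;
        }

        // Inbound adaptor: native -> canonical. An outbound adaptor on the CRM
        // side would do the mirror-image translation into its own schema.
        static CanonicalCustomer toCanonical(ErpCustomer erp) {
            CanonicalCustomer c = new CanonicalCustomer();
            c.id = erp.kunnr;
            int split = erp.name1.indexOf(' ');
            c.firstName = split < 0 ? erp.name1 : erp.name1.substring(0, split);
            c.lastName = split < 0 ? "" : erp.name1.substring(split + 1);
            return c;
        }

        public static void main(String[] args) {
            ErpCustomer erp = new ErpCustomer();
            erp.kunnr = "0000100123";
            erp.name1 = "Jane Smith";
            CanonicalCustomer c = toCanonical(erp);
            System.out.println(c.id + ": " + c.firstName + " " + c.lastName);
        }
    }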

Both BEA's Linkin and IBM's Cosby agree that, as with any other bus-oriented system, a management layer is needed. Linkin describes this as a layer charged with housing a directory of the services that are available on the bus, around which policies can be set and security can be applied.

If you haven't taken a ride on the Enterprise Service Bus yet, perhaps now is the time to check it out.
