"From the desktop to the enterprise." That's sort of the way Redmonk principal analyst and co-founder James Governor and I summed things up this week when we decided that the same sort of architecture for integrating loosely coupled services on the server side is probably the way it should work on the desktop too.
We're already seeing desktop applications talking to server-side services via XML-based Web services. Why not to each other, then? OK, so it is being done in some circumstances. But why not exclusively? Why not rid the world of proprietary integration interfaces and stick strictly to standards-based implementations of the various Web services specifications to facilitate any and all integration?
Well, one issue is that a lot of desktop applications that talk to each other (for example, the ones inside Microsoft Office, but there are many others) do so within the context of a single OS. XML Web services, on the other hand, are generally designed to work across an HTTP (Web) network. It's not that applications can't talk to each other through non-networked XML Remote Procedure Calls (RPCs), or that those same applications can't make a round trip to the network and back just to speak to some other software component. It's just that the minute the context of the integration changes, things get complex and there's more chance for failure. So, why not make the environment more predictable? Just do everything across the network. That way, there's no exception handling on either end ("Oh, hi, you little RPC, you... you're from the same machine? Then we'll do these three steps a little bit differently").
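To make the point concrete, here's a minimal sketch in Python using its standard-library XML-RPC modules. The app names and the `get_unread_count` method are purely illustrative (nothing here is a real Outlook or Firefox interface); the point is that the "calling" application uses exactly the same HTTP code path whether its peer happens to live on the same machine or across the network — no same-machine special case.

```python
# Sketch: two desktop apps integrating over HTTP XML-RPC so that a local
# call looks identical to a remote one. All names are hypothetical.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_unread_count():
    """Pretend this is a mail client reporting its unread-message count."""
    return 3

# The "mail client" exposes its service over HTTP, even on the same box.
# Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(get_unread_count, "get_unread_count")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "calling app" speaks plain HTTP XML-RPC. If the mail client later
# moves to another machine, only this URL changes -- not the calling code.
mail = ServerProxy(f"http://127.0.0.1:{port}")
print(mail.get_unread_count())  # -> 3
```

Swap the `127.0.0.1` URL for a remote host and the caller is untouched, which is exactly the predictability argument above: one integration context, no branching on where the peer lives.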
The idea implies something else -- that each application is running by itself on an independently addressable machine that's connected to the network. Today, this is done on mainframes and servers. Why not on desktops? I had this idea while I was busy building a bunch of virtual machines yesterday. I thought to myself, "Self, as long as I've got one VM for work, another for play, and another for testing, why not go the next logical step and simply have one VM per application? One for Outlook. One for Firefox tab #1. One for Firefox tab #2." You get the picture.
Today, we're just not there yet. If you're a power user like me and you run upwards of five applications (not counting what's running in the toolbar) simultaneously, the idea of running them all in their own VMs would bring all but the most tricked-out PCs to their knees (not to mention that, in the case of Windows, if those apps are integrated via ActiveX, the integration would, well, disintegrate). Today, it's nearly impossible to imagine such an architecture. But tomorrow? Clearly, between Moore's Law, higher-bandwidth networks, and the direction both Intel and AMD have taken in baking virtualization into their chips, the muscle for this sort of architecture will be a reality. The question is whether developers will go for it.
Our discussion of this architecture was provoked by James' summary of what he saw at a recent IBM SOA event where, according to him, the attendees were clearly grokking SOA as an approach to their IT infrastructures.
To hear this Monkcast, click play above (or the MORE button to download it). Or, you can subscribe to our Monkcasts by pointing your podcatcher at this RSS feed (learn more about how to get it on your PC or MP3 player automatically).