What did you do for the holidays? Did you visit the relatives? How about skiing or snowboarding? Or maybe you raided your local retailers the day after Christmas. Not me. For the better part of my holidays, I sat glued to my notebook preparing my custom-developed, editorial constituent relationship management system for its next architectural revolution.
With my project, which I described in a previous column, I'm at the point that most projects reach where I'm trying to keep my future technology options as open as possible. At the same time, I'm realistic about the fact that some decisions I'm about to make will lead me past the point of no return with some technologies, products or vendors.
Data, mobility and collaboration are the biggest sources of my angst. Ultimately, the most difficult challenge has to do with making sure that anyone (especially me) who wants access to the data not only has that access, but also has the most recent version, even when they're mobile.
In theory, Web services, which make it possible to integrate systems that previously had difficulty integrating with each other (such as a Java-based handheld with a .Net-wrapped SQL Server database), can underpin an anytime, anywhere, any device dream. That's in theory, however.
The more I sink my teeth into the architecture of my project, the more I realise that its greenfield nature has given me the luxury of minimising the amount of dissimilar system integration that's necessary. I should select technologies that are or will eventually be Web services-enabled to allow for interoperability down the road. For now, picking technologies that are designed to work together is the path of least resistance and more importantly, fewest headaches. Web services may serve as an insulation layer between systems that can't natively talk to each other, but they're also a layer of complexity I can do without for now.
Another problem with the Web services approach is the role that the network plays. When I first got excited about Web services, the architecture I had in mind was one central database that was accessible from multiple clients including handhelds and desktops via the Internet. There would be one copy of the data, and synchronisation wouldn't be an issue because all clients would be working with real-time data. When I woke up from that dream, I realised that I spent far too much time working with my notebook or my handhelds in places where there's no network, much less the Internet.
The BlackBerry 7230 PDA/Phone that I'm testing, for example, gets its connectivity from T-Mobile. While T-Mobile has done a good job "lighting up" major US geographies with a signal, once you venture off the beaten path -- say to Route 1 in Southern New Hampshire, where every other wireless service seems to work -- all connectivity bets are off. This is why I say to readers who ask me which PDA/Phone to get (or just which phone to get for that matter) that the three most important criteria are the network, the network and the network.
Unfortunately for us, you can't pick any gadget and connect it to any network. For the most part in the US, you pick the provider (T-Mobile, AT&T Wireless, Cingular, Sprint PCS, Nextel, Verizon Wireless, etc.), and the provider gives you a list of devices from which to choose. If the device you want comes from the network provider with the best coverage of the areas you like to roam, then you're in luck. Even then, be prepared for radio silence. It will be quite some time before the various "cellcos" have most of the US blanketed with 3G-rated wireless connectivity.
In the meantime, what's a developer of a mobile-ready application to do? The answer is the same as it was a decade ago: synchronise. With no guarantee of connectivity, the clients for my application will be better off with a local copy of the database than trying to access a master copy through Web services or via native interfaces. That's not to say that a master copy of the database won't be available through the Internet for queries from a public Web terminal and eventually for some sort of Web services-enabled application. For now, since I can't depend on the network, synch is king.
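The local-copy-first pattern I'm describing can be sketched in a few lines. This is an illustration only, not code from my application (which is written in Visual Basic); the class and method names are invented for the example, and it's written in Python purely for brevity. The key property is that reads and writes never wait on the network: writes land in the local replica immediately and are queued for the master until connectivity returns.

```python
from collections import deque

class OfflineStore:
    """Sketch of a local-copy-first client: reads and writes always hit
    the local replica; writes are also queued and pushed to the master
    whenever connectivity returns. Names are illustrative only."""

    def __init__(self):
        self.local = {}         # local replica: record id -> data
        self.pending = deque()  # writes not yet sent to the master

    def write(self, rec_id, data):
        self.local[rec_id] = data          # succeeds with no signal at all
        self.pending.append((rec_id, data))

    def read(self, rec_id):
        return self.local.get(rec_id)      # never blocks on the network

    def sync(self, master):
        """Flush queued writes to the master copy once we're online."""
        while self.pending:
            rec_id, data = self.pending.popleft()
            master[rec_id] = data

store = OfflineStore()
store.write(1, "Alice, Route 1, NH")  # written while out of coverage
master = {}
store.sync(master)                    # later, back on the network
print(master)                         # {1: 'Alice, Route 1, NH'}
```

The trade-off, of course, is that a queue of deferred writes is exactly what creates the conflicts that a real synchronisation engine has to resolve.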
All along, I had a hunch that synchronisation would be a requirement, but I wasn't sure where it would lead me. In its current form, where it's confined to a single notebook system, my application is written in Visual Basic and, on a per-email basis, it shuttles data back and forth between Microsoft Outlook and Microsoft Access. After I discussed my plan with Microsoft, it became clear that Access is not a good choice if the data needs to be accessible on handhelds and via the Web as well.
For example, regardless of what handheld I pick (Microsoft-based or not), Access is a dead end if the data needs to be replicated to a handheld or a database server. Since a majority of what I'm storing in the database is contact data, some readers have suggested that I consider using the contact repository of Outlook as my database, given Outlook has many tools for synchronising with other systems. However, the highly non-relational nature of Outlook's address book makes it a poor choice for a database with 1500 records (and growing), and with a lot of one-to-many relationships.
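To make the one-to-many point concrete: a relational layout keeps one row per contact and any number of related rows (phone numbers, in this hypothetical schema) keyed back to it, which a single flat address-book record can't express. The table and column names below are invented for illustration, and SQLite stands in for whatever RDBMS I end up with.

```python
import sqlite3

# Illustrative one-to-many schema: one contact, many phone numbers.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE phones (
        contact_id INTEGER REFERENCES contacts(id),
        number TEXT
    );
""")
db.execute("INSERT INTO contacts VALUES (1, 'Alice')")
db.executemany("INSERT INTO phones VALUES (1, ?)",
               [("555-0100",), ("555-0199",)])

# A join recovers the relationship; a flat per-contact record would
# have to cram both numbers into one field.
rows = db.execute("""
    SELECT c.name, p.number
    FROM contacts c JOIN phones p ON p.contact_id = c.id
    ORDER BY p.number
""").fetchall()
print(rows)  # [('Alice', '555-0100'), ('Alice', '555-0199')]
```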
Incidentally, this is one problem Microsoft is aiming to fix when it rolls out the next version of Windows (code-named Longhorn). Longhorn's underlying file system -- known as WinFS -- will be based on the same technology as the next version of Microsoft's SQL Server (code-named Yukon). One advantage of the Yukon-based technology, according to Microsoft, is that any data that's stored in WinFS will easily replicate to other WinFS-based file systems. Not surprisingly, the replication capabilities of Microsoft's RDBMS technologies are partly responsible for leading me in the direction of an RDBMS for my application.
As it turns out, Microsoft's SQL Server 2000, for which there is both a desktop and a handheld (Windows CE) edition, has built-in replication technology that unburdens developers from programmatically dealing with synchronisation. For example, using a technique called merge replication, data can be synchronised across the server, desktop, and handheld versions of a database.
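The core idea behind any merge-style synchronisation can be sketched with a toy last-write-wins merge across several replicas. To be clear, this is my own simplified Python illustration, not how SQL Server's merge replication is implemented -- the real engine tracks changes and offers much richer conflict resolution -- but it shows what "each replica edits locally, then everything converges" means: each record carries a last-modified timestamp, and the newest version of each record wins everywhere.

```python
def merge(*replicas):
    """Toy last-write-wins merge across any number of replicas.
    Each replica maps record id -> (timestamp, data); the version
    with the newest timestamp wins for each record."""
    merged = {}
    for replica in replicas:
        for rec_id, (ts, data) in replica.items():
            if rec_id not in merged or ts > merged[rec_id][0]:
                merged[rec_id] = (ts, data)
    return merged

# Three replicas -- server, desktop, handheld -- each edited offline.
server   = {1: (10, "Alice v1"), 2: (30, "Bob edited on server")}
desktop  = {1: (20, "Alice edited on desktop")}
handheld = {2: (25, "Bob edited on handheld"), 3: (5, "Carol, new")}

print(merge(server, desktop, handheld))
# Desktop's newer Alice wins, the server's newer Bob beats the
# handheld's edit, and the handheld's new Carol record propagates.
```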
The availability of such built-in replication capability makes for a very compelling approach to mobility, especially when wireless networks are less reliable than some would have you believe. But is Microsoft SQL Server the answer? Perhaps, but one thing is clear: if you want to leave as much of the replication process as possible to the technology (and not to the programmer), you need to stay within the same family of products.
If servers, desktops and handhelds are involved in the solution, and you're going with SQL Server CE (the version of SQL Server for PocketPC-based devices), then you're best off using SQL Server on all machines. This also means going with a Microsoft operating system on all machines as well since Microsoft doesn't have a version of SQL Server for devices like the BlackBerry or for operating systems such as Linux. Many IT shops prefer this homogeneous approach. Likewise, there are other "single family" answers if you want to venture beyond Microsoft. Sybase and its subsidiary iAnywhere offer the same sort of replication technology within their family of server, desktop, and handheld-based products, and some will find Sybase's openness to other platforms, such as Linux and BlackBerry, to be attractive.
I haven't yet decided which approach to pursue. While I'm evaluating the various database families, I'm also rewiring the code in my application to be more RDBMS-friendly (as opposed to strictly MS Access-friendly). This involves the use of certain Microsoft ActiveX Data Objects (ADO) classes that, fortunately, are backwards-compatible with Access, but that will support other databases simply by changing one parameter. Once I decide on a database, I'll get back to you, but one design decision is clear -- synchronisation is a must.