We’re all used to applications that are at best frustrating, and at worst infuriating. They just don’t seem to do what we want, when we want it.
After all, there’s all that processing power inside our PCs, and they just don’t seem to know that we’re in a hurry, or that we’re really after more in-depth information. We’re standing on street corners with mobile phones, trying to work out whether we have an overdraft on complex banking sites that insist on trying to sell us loans. Our GPSes don’t give us a quick and simple way of letting our friends know just when we’re going to arrive – or our business contacts why we’re going to be late for a meeting. And how do our machines and services cope with human-scale transactions that can last days, or even weeks?
There’s a simple answer: it’s a failure of context.
We live our lives contextually, knowing just why we’re doing something, or just where. Software can’t “know” that, as it’s operating in isolation, locked away from the inputs and information that can contextualise the actions it’s completing. It’s a fault of the procedural model, where inputs drive outputs and nothing else matters.
But everything’s changing now, thanks to the cloud and to service-oriented programming models. Applications can now be event-driven components, operating in a stateless environment, tied together in frameworks through workflow and messaging. Separated from presentation layers, and with user interfaces that bring things together, they’re able to handle human-scale interactions from all types of device. Tools like YQL and Yahoo! Pipes transform and combine data from multiple sources, bringing data to our applications without requiring complex query languages or expensive integration.
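The event-driven, stateless style described above can be sketched in a few lines. This is an illustrative toy, not any real framework’s API: the `Event` type, the handler, and the dispatcher are all invented for the example, and a handler’s output depends only on the event it receives.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str
    payload: dict

def handle_location_update(event: Event) -> list[Event]:
    """A stateless component: its output depends only on the input event."""
    if event.payload.get("speed_kmh", 0) < 5:
        # Barely moving - emit a follow-up event for other components to react to.
        return [Event("possibly_stuck", {"where": event.payload.get("place", "unknown")})]
    return []

# A toy message bus mapping event kinds to the handlers wired up for them.
HANDLERS: dict[str, list[Callable[[Event], list[Event]]]] = {
    "location_update": [handle_location_update],
}

def dispatch(event: Event) -> list[Event]:
    """Route an event to its handlers and collect any follow-up events."""
    out: list[Event] = []
    for handler in HANDLERS.get(event.kind, []):
        out.extend(handler(event))
    return out
```

Because each handler is a pure function of its event, components like this can be scaled out, replaced, or recombined without shared state – the property that makes the workflow-and-messaging model workable.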
That’s only part of the story needed to deliver contextual applications, and it’s actually a very small part. What we really need are tools to help us determine context and to make it available to our applications.
Over the last couple of years I’ve realised just what that context-mediating tool is. It’s the smartphone. Whether it’s WinMo, Android, iPhone OS or BlackBerry OS doesn’t matter – what matters is that it’s the one piece of our computing arsenal that knows where we are, what we’re likely to be doing (and who with). It knows the people we work and socialise with, the places we do it (and how frequently). It knows how to get there and how fast we’re getting there (and if we’re on the right route).
That’s why it was good to see RIM understanding the role of the smartphone at their recent Developer Conference. It’s a tiny step, but the announcement of an API based on the work done by recent acquisition Dash is very good news for anyone trying to solve the context problem.
Dash began life as a personal navigation device (PND). Not just any old PND, but one where each connected device was a real-time probe into traffic conditions. The original Dash hardware even used its users’ driving patterns to create maps in near real time (when the new Mountain View junction on the 101 opened, Dash had it mapped and in its routing software within a couple of hours). The new Dash isn’t a navigation tool – it’s the old Dash’s secret sauce: the routing algorithms that use that sensor data to predict just when you’ll arrive at your destination.
With a world blanketed by BlackBerry sensors it’s easy to imagine context-sensitive alarms. You need to be at a meeting by 10.30? If the traffic’s good, you might get an extra ten minutes in bed. If it’s bad, well, you’re up a little earlier than you planned. And if the traffic gets hairy while you’re en route, why not have your phone automatically text the contact in your next appointment to let them know you’re running late – and keep them updated with an ETA? There’s no breaking mobile phone laws, as it’s all automatic, driven by the context of your journey.
Dash isn’t the whole solution to the context problem, but it’s a good example of how an API built on contextual information can change the way we do things, making them more human-centric and a lot less stressful. APIs are starting to mix personal informatics – using tools like Outlook 2010’s social connector to mine the information in our email, our address books, and the social networks and services we use every day – with real-time sensors that give applications access to the real world. When the two come together, we’re in for what could be the biggest effect IT has had on society yet.
This could well be fun – and extremely profitable for the companies that take advantage of context and what it can do for their users.
It could also make Big Brother look like an elementary school bully.
Let’s make it the first option. Human scale computing is an answer to so many different problems.