IDF Day 3: Putting the future in context

Written by Rupert Goodwins, Contributor

Intel CTO Justin Rattner's keynote at IDF on Wednesday was all about context – something that Intel thinks is going to be the key driver for new personal technology over the next few years.

It's another step away from the old idea of IT being about data. Data is easy: we're so good at creating and storing it that we managed 800 billion gigabytes last year. But context is hard: what the data means and what conclusions we can draw from it are questions that only make sense with context – and without the answers, the data is so much noise.

The sort of context Rattner was talking about is more personal. Where are you? What are you doing? How should your technology behave? Mobile phones have already started along that path: a phone knows which way it's being held and rotates its screen accordingly; it knows when it's being held to your ear and turns its screen off; it can even change its ringtone according to where you are.

With more information – hard information from sensors, and soft information from usage observations – technology can infer more. One example was using location, calendar, audio sampling and software usage to spot when you're in a meeting and, further, when you're actually giving a presentation and should not be interrupted by anyone, not even by IM or text.
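The fusion of those signals can be sketched in a few lines. This is a hypothetical illustration, not Intel's actual system: the signal names, apps and thresholds are invented to show how a handful of hard and soft inputs might combine into a coarse context label.

```python
# Hypothetical sketch of multi-signal context fusion.
# All signal names and thresholds are illustrative assumptions.

def infer_context(location, calendar_busy, audio_level, foreground_app):
    """Return a coarse context label from a few sensor/usage signals."""
    in_meeting_room = location == "meeting_room"
    presenting_app = foreground_app in {"PowerPoint", "Keynote"}
    quiet_room = audio_level < 0.3  # low ambient noise: one voice dominating

    if in_meeting_room and calendar_busy and presenting_app and quiet_room:
        return "presenting"   # suppress everything, even IM and text
    if in_meeting_room and calendar_busy:
        return "in_meeting"   # defer non-urgent notifications
    return "available"

print(infer_context("meeting_room", True, 0.2, "PowerPoint"))  # presenting
```

Real systems would learn these rules from observation rather than hard-code them, but the shape of the decision is the same: several weak signals, none conclusive on its own, combined into one confident inference.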

Don't you know all that already? Well, yes, but the utility comes from technology knowing automatically, and getting it right: anyone who's tried to use a smartphone while lying down knows the intense frustration when the screen orientation gets it wrong. Contextually aware technology has the potential to push IT annoyance to new levels of stratospheric anger.

Two more steps change the game even more. If you share your context, other people and other technologies can make much better decisions about when and how to interact and share information – and you'll already have spotted the good things that could happen then, as well as the dangers for personal security and privacy. But the last and biggest step is time: by watching the way your context changes over time, the technology creates a map of your life to an intimate degree.

What happens then? Let's take health, another Intel obsession. Many diseases have very gradual onsets, with signs that go unnoticed even by professionals: something that understands what you do in the context of your life can spot the pattern and start diagnostics early. Or there may be patterns you don't know about in your financial affairs, patterns with predictive power, and by combining those with a deep knowledge of your behaviour and habits some amazing advice becomes possible.

It's at this point that a ghost appears – the spirit of artificial intelligence. For the business of sensing the world, judging the probabilities of what that sensory data means, judging the probabilities of what will happen next and arriving at a decision about future actions is a pretty good description of the conscious mind at work. At some point, contextually aware computing and inference engineering will start to surprise us – making suggestions and observations that we could never make ourselves, but which turn out to be good. In short, it will start to become creative. And while nobody knows what intelligence actually is, creative reasoning is a big part of it.

AI has had a bad press, largely because it has failed so badly in the past. We couldn't do it by writing huge programs in special languages or by designing complex machines from the bottom up. But take a computer that can learn, and teach it – and things happen. As Rattner told me in a meeting after his keynote, machine learning is making progress so fast that it surprises even him.

The technical details of how this works aren't that complex: there are some key concepts in probability maths, such as Bayesian inference, but the actual logic is surprisingly simple. You need to do an awful lot of it, and there are plenty of challenges in dealing with noise and errors, but we are exceptionally good at doing an awful lot of simple logic – Intel quintessentially so.
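To show how simple the core logic is, here is a single Bayesian update – the kind of sum a context engine would run millions of times over. The scenario and the numbers are invented for illustration: a prior belief that the user is in a meeting, revised when the microphone hears several alternating voices.

```python
# One Bayesian update: P(H|E) = P(E|H) * P(H) / P(E).
# Scenario and numbers are invented for illustration.

def bayes_update(prior, likelihood, evidence_prob):
    """Posterior probability of a hypothesis H given evidence E."""
    return likelihood * prior / evidence_prob

p_meeting = 0.2                 # prior: user's calendar habits
p_voices_if_meeting = 0.9       # voices are very likely during a meeting
p_voices_if_not = 0.1           # but possible anyway (radio, passers-by)

# Total probability of hearing voices, meeting or not:
p_voices = p_voices_if_meeting * p_meeting + p_voices_if_not * (1 - p_meeting)

posterior = bayes_update(p_meeting, p_voices_if_meeting, p_voices)
print(round(posterior, 3))  # 0.692
```

One noisy clue lifts the belief from 20% to about 69%; feed in more sensors and the same multiply-and-normalise step repeats. Nothing here is clever – the cleverness is in doing it at scale, fast, which is exactly the kind of problem Intel likes.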

It is hard to avoid the conclusion that something wonderful is happening, that our tools, our expertise, our culture and our commerce are aligning in the right way to accelerate these developments.

During my meeting with Rattner, I asked on Twitter whether anyone had any questions. Long-time tech journo Wendy Grossman did: does Rattner still believe in the singularity?

(The singularity is a concept popularised by Ray Kurzweil: the point at which computers are not only intelligent, but more so than we are – and can thus refine their own intelligence at an ever-increasing rate.)

He laughed. "I've managed to avoid that question for a while now." But as he went on to describe his observations of machine learning and the various quickenings of pace – "and you look twenty, twenty-five years ahead, and that's not so far from what Ray says" – I inferred from the context that he probably did.

And while I don't subscribe to that particular future, it's clear that something is going to happen and it's going to be extremely interesting. It's good to report: here comes the future.
