Last Friday PC World carried a report by Nancy Gohring under the title "Microsoft: Stodgy or Innovative? It's All About Perception."
In it Ms. Gohring, who no doubt felt a thrill going up her leg at the time, reports on a demonstration of new Windows Surface technologies given by Microsoft's Craig Mundie:
At the meeting, Craig Mundie, chief research and strategy officer, showed off a futuristic application for Surface, Microsoft's multitouch tabletop computer. He virtually entered an art gallery on a downtown Seattle street, browsing through items that he could pick up and spin around to look at them from all directions.
In another demonstration, he took a photograph of a street and his handheld computer identified it in real time and began displaying information about shops on the street, including information about table availability in a restaurant.
After the demo, one analyst commented to Mundie that the technology looked great but that the rest of the world doesn't get to see such demonstrations, and he urged Mundie to spread the word so that people will perceive Microsoft as the innovative company that it is, rather than as a legacy software vendor.
The whole article is part of an effort to spread the good news that Microsoft's image should rightfully be that of the software innovator rather than that of the legacy exploiter - but, Microsoft's history aside, I have problems with this demo.
Specifically, the processes through which a photograph of an unknown location, taken from an uncharted perspective, could be used to precisely identify that location are neither innovative nor within reach of any of today's handheld computers.
Ingres, back in the 1970s, was designed to accommodate research on image management and automated content based indexing - but the processors of the day couldn't begin to handle the load. In the late eighties Postgres achieved a basic level of functionality for this using Sun and VAX clusters as backends, and in the late nineties its commercial variant, Illustra, came with an optional applications plugin combining batch indexing with near real time image feature recognition and search. This worked well, provided you had either a lot of patience - about 38 minutes per second of video feed on a 296MHz Sun 450/4, roughly 2,280 times slower than real time - or a sufficiently large set of processor resources, like a four way VAX 8400 cluster or a pair of Sun 10Ks.
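For readers wondering what "batch indexing with near real time image feature recognition" actually involves, here's a deliberately minimal Python sketch of the general idea: reduce every stored image to a precomputed feature vector, then rank candidates against a query by comparing vectors. The 16-bin intensity histogram, the function names, and the toy "gallery" below are all my own illustration - not the Illustra plugin's actual algorithm, which used far richer features - but they show why the cost scales with both image size and gallery size:

```python
def histogram(pixels, bins=16):
    """Coarse intensity histogram, normalized so comparisons are
    independent of image size. Pixel values assumed in 0..255."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0
    for distributions with no overlapping bins."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_match(query_pixels, index):
    """Return the name of the indexed image whose histogram best
    matches the query image."""
    q = histogram(query_pixels)
    return max(index, key=lambda name: similarity(q, index[name]))

# Batch indexing step: reduce each stored image to its feature
# vector once, ahead of time.
gallery = {
    "street_scene": [30] * 500 + [200] * 500,  # dark + bright regions
    "forest_scene": [60] * 1000,               # uniformly mid-dark
}
index = {name: histogram(px) for name, px in gallery.items()}

# Query step: a slightly noisier shot of the street scene still
# lands closest to the right stored image.
query = [35] * 480 + [195] * 520
print(best_match(query, index))  # prints "street_scene"
```

Note that even this toy version touches every pixel of the query and compares against every indexed image; replace the histogram with real perspective-tolerant feature extraction and the gallery with every storefront in Seattle, and the 38-minutes-per-second figure above stops looking surprising.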
After Informix took over Illustra and then impaled itself on NT, the technology moved to IBM - where a few of the very blessed can now finally do this kind of thing in near real time, for known-perspective imaging, using nothing more than a pair of Cell2 boards.
The nerds on my favorite television series, NCIS, can identify the actual location shown in a ground-level photograph of a forest scene by comparing it to satellite-sourced topographic imagery, but in real life Google doesn't have enough computers to let you hold your iPhone out the car window, click once, upload the shot, and get back the phone number and driving directions for the nearest steak pit with an empty table - all before your wife can spot some place that serves two-leaf salads and make you go there.
The demo, in other words, was a rigged actualization of ideas from the 1970s, sold to analysts and reporters as proof of Microsoft's switch from legacy exploiter to innovator - and because the demo demonstrates exactly the opposite, it's paradoxically also exactly the kind of Windows change I really can believe in.