One of the things I had to leave out of my 4,200-word SmartPlanet feature on speech recognition from last month was a section on what Microsoft was doing with visual, as opposed to aural, interface technology.
The Kinect component of its popular Xbox gaming console -- a plastic stick jammed with sensors, essentially -- is opening up a new world of development for the Redmond, Wash.-based tech giant. In addition to the tremendous amount of audio data it gives the company -- that's the crux of the speech recognition piece -- Kinect is also the company's first major, public leap in acquiring and archiving visual data from people around the world.
In both cases, all that data is helping the company refine its systems to deal with the real world.
Many of Microsoft's rivals aren't publicly dabbling in this space just yet, so what I learned about visual interfaces didn't make it into the speech recognition piece. But I wasn't the only tech editor who found the topic fascinating. Tech site The Verge recently sent its editor-in-chief, Joshua Topolsky, to Microsoft Research HQ to play with "Kinect Fusion," the company's 3D modeling experiment connected to the device, and "LightSpace," a similarly minded venture that uses projectors and depth cameras to replicate the experience of using a capacitive touchscreen display.
Both technologies have been around for a while, but the way they're being applied shows how Microsoft's top R&D minds are thinking about that crucial intersection where perfect, rigid technology collides with imperfect, squishy humans.
Here's the video:
This post was originally published on Smartplanet.com