'Minority Report' gestural computing pretty much here

Forget the desktop environment with one or two screens. Oblong Industries' spatial operating system extends your workspace to every available screen in a room. With the latest developments, the technology is a frontrunner for the future of the user interface.

You've seen it on the big screen and at TED; now the idea of manipulating digital data in 3-D space as if it were Play-Doh is taking a big leap into reality, thanks to the visionary work of John Underkoffler, chief scientist at Oblong Industries.

The inventor of the futuristic computing interface used in the film Minority Report spoke yesterday at the Net:Work Conference in San Francisco.

Underkoffler argued that much of that sci-fi vision applies to the desktop of tomorrow, and that the next big computing disruption will come from advances in the user interface, because the interface is "all you have."

"Behind the scenes, computers and networks are still abstract machines that essentially flip switches, but people don't think in the abstract." Humans tend to think in concrete terms, such as time and space. It is therefore Oblong's duty, said Underkoffler, to bring computers into their rightful place: the real world.

To "de-abstract" the machine, we need to look at space and how humans use it, pointing a finger to reference and connect with distant objects (or pixels). This most basic of gestures is the linchpin of Oblong's spatial interface, serving as the unifying means of accessing dozens of networked screens in a room.
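The pointing gesture can be thought of as a ray cast from the hand into the room, landing wherever it first meets a screen. A minimal sketch of that idea, with entirely hypothetical names and geometry (this is illustrative vector math, not Oblong's g-speak API):

```python
# Illustrative sketch: model a pointing gesture as a ray in room
# coordinates and find where it strikes a screen's plane.
# All names and geometry are hypothetical, not the g-speak API.

def point_at_screen(hand, direction, screen_origin, normal):
    """Intersect a pointing ray with a screen's plane.

    hand, direction: 3-D ray origin and direction (room coordinates).
    screen_origin, normal: a point on the screen and its plane normal.
    Returns the 3-D intersection point, or None if pointing away.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, normal)
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the screen plane
    t = dot([o - h for o, h in zip(screen_origin, hand)], normal) / denom
    if t <= 0:
        return None  # the screen is behind the hand
    return tuple(h + t * d for h, d in zip(hand, direction))

# A hand at the room origin pointing along +x toward a screen at x = 2:
hit = point_at_screen((0, 0, 0), (1, 0, 0), (2.0, 0, 0), (1, 0, 0))
# hit is the room-space point the finger references: (2.0, 0.0, 0.0)
```

Because the intersection is computed in shared room coordinates, the same gesture works against any screen in the room; the system only needs to know where each display sits in space.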

The second idea ripe for scrutiny, said Underkoffler, is that despite the existence of computer networks, devices and displays are "solipsistic islands" isolated from one another when in fact they should be "socialized."

"Part of our project will be to socialize the machine, but not in the sense of social networks. Let the computers talk to one another in a meaningful and rich way," he said.

Oblong is refining its g-speak spatial operating environment, which has roots in three decades of research at the MIT Media Lab. The device-agnostic system provides a spatial user interface that works across networked computers, so that dozens of screens can be used seamlessly by multiple workers, both in the same room and remotely.

The system assigns every pixel a set of three-space coordinates unique among all the pixels in a room, providing a way to relate different screens to one another in response to gestural input.
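The per-pixel coordinate idea can be sketched in a few lines: given where a screen sits in the room and which way it faces, each pixel index maps to its own point in space. The names, units, and layout here are assumptions for illustration, not Oblong's implementation:

```python
# Illustrative sketch: map a pixel (px, py) on a screen to unique room
# coordinates via the screen's origin and its right/up basis vectors.
# Names and parameters are hypothetical, not Oblong's implementation.

def pixel_to_room(px, py, origin, right, up, pixel_size=0.001):
    """Return the room-space coordinates of pixel (px, py).

    origin: room-space position of the screen's pixel (0, 0).
    right, up: unit vectors along the screen's horizontal/vertical axes.
    pixel_size: physical size of one pixel, in meters (assumed 1 mm).
    """
    return tuple(
        o + pixel_size * (px * r + py * u)
        for o, r, u in zip(origin, right, up)
    )

# The same pixel index on two differently placed screens lands at two
# different room points, so every pixel in the room is uniquely addressable.
wall = pixel_to_room(100, 50, origin=(0, 0, 2), right=(1, 0, 0), up=(0, 1, 0))
desk = pixel_to_room(100, 50, origin=(1, 0, 0), right=(0, 1, 0), up=(0, 0, 1))
```

Once every pixel has room coordinates, relating screens to each other (or to a pointing gesture) reduces to ordinary geometry in a single shared space.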

This year, Oblong launched Mezzanine as a way to apply the technology to the work environment. Watch Underkoffler's talk for the latest thinking on what the future of computing may look like:

