Cheap 3D vision bringing robots into the real world

Ever try to navigate a room with one eye closed? Robots have been attempting it for years.
Written by Greg Nichols, Contributing Writer

Most robots that use camera sensors are confined to 2D perception. Wanna know how restrictive that is? Grab a racket, close one eye, and try to get through a set of squash.

A company called PIXMAP has a new 3D real-time robotics localization and mapping technology, and it could be a game changer. The new tech, called Reality Capture, enables robots and drones to map the world in photorealistic 3D. That means bots kitted out with Reality Capture can sense both geometry and colors, essential capabilities for a coming generation of autonomous machines that will operate in real-world environments.

Consider the tasks that personal assistant robots will be asked to perform. Robots will have to learn the rooms in a house, identify the front door, greet a visitor and accompany them to the host, water plants, feed pets, and serve snacks to guests of varying heights.

For the last 60 years, industrial robots have performed almost exclusively repetitive tasks, but a new breed of collaborative robot will have a much broader assignment. With 3D visualization, these bots will be able to pick and place crates or retrieve parts. They will also be able to build 3D models of free space, allowing them to position objects without collisions in constrained environments.
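The idea of reasoning over modeled free space can be illustrated with a toy example. This is a minimal sketch, not PIXMAP's actual representation: it assumes the robot has already collapsed its 3D map into a 2D occupancy grid (0 = free, 1 = occupied) and wants to find a spot where an object's footprint fits without collision.

```python
# Minimal sketch of collision-free placement on an occupancy grid.
# The grid, cell values, and footprint sizes are illustrative assumptions,
# not any vendor's actual data structure.

def fits(grid, row, col, h, w):
    """Return True if an h x w footprint of free cells (0) fits with its
    top-left corner at (row, col)."""
    if row + h > len(grid) or col + w > len(grid[0]):
        return False
    return all(grid[r][c] == 0
               for r in range(row, row + h)
               for c in range(col, col + w))

def find_placement(grid, h, w):
    """Scan the grid and return the first (row, col) where the footprint
    fits entirely in free space, or None if no spot exists."""
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if fits(grid, r, c, h, w):
                return (r, c)
    return None

# 0 = free space, 1 = occupied (a wall, a shelf, another object)
room = [
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
]
print(find_placement(room, 2, 2))  # → (0, 2)
```

A real system would work in three dimensions and account for the robot arm's reach and approach path, but the core question, "is this volume free?", is the same.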

PIXMAP is just the latest entry in a new category of affordable 3D sensing. Until the last few months, stereo vision has been far too complex to pull off on the cheap. That's because 3D sensing was accomplished optically, requiring precise calibration of sensitive physical cameras.

But late last year a Bay Area startup called StereoLabs introduced the first affordable high definition stereo camera. When coupled with a drone, autonomous car, or some other robot, the device can effectively give machines something like human vision, allowing for deft indoor/outdoor navigation at a price that's thousands or even tens of thousands of dollars less than the next cheapest technology.
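The principle behind stereo depth is the same one our eyes use: a nearby object shifts more between the two views than a distant one, and for a calibrated rig that shift (disparity) converts directly to distance via depth = focal length × baseline / disparity. The sketch below uses illustrative numbers, not the specifications of any StereoLabs product.

```python
# Depth from stereo disparity: depth = focal_px * baseline_m / disparity_px.
# Focal length, baseline, and disparities are assumed example values.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance in meters to a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

focal_px = 700.0    # focal length in pixels (assumed)
baseline_m = 0.12   # spacing between the two lenses in meters (assumed)

# A large disparity means a close object; a small one means it's far away.
print(depth_from_disparity(84.0, focal_px, baseline_m))  # → 1.0 (meters)
print(depth_from_disparity(8.4, focal_px, baseline_m))   # → 10.0 (meters)
```

Running this per pixel across two rectified images yields a dense depth map, which is what lets a single passive camera pair stand in for far more expensive laser rangefinders.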

The big breakthrough is that the latest stereo sensors don't rely on precise optical alignment, instead letting algorithms compensate for any sensor variance. This hasn't received enough attention. Today, robots that navigate their environments autonomously rely on lasers, radar, infrared, or some combination of these technologies to gauge distance, recognize objects, and avoid collisions. Cheaper optical vision brings sophisticated navigation capabilities to inexpensive and hobbyist bots.

Combined with rapid prototyping, the plummeting price and increasing sophistication of sensors are changing the game in robotics.
