
MIT employs Kinect and lasers in real-time mapping gear for firefighters

The wearable set of sensors, built using a modified Microsoft Kinect and LIDAR, is meant to help emergency responders map out an interior space as they move through it.
Written by Jack Clark, Contributor

With the aid of a Microsoft Kinect, a laser range finder and a laptop, MIT researchers have built a wearable piece of gear that maps a building in real time.

The prototype SLAM (Simultaneous Localisation and Mapping) equipment, unveiled by MIT on Monday, is designed for use by firemen and other first responders reacting to an emergency.

"Our work is motivated by rapid response missions by emergency personnel, in which the capability for one or more people to rapidly map a complex indoor environment is essential for public safety," the researchers wrote in a paper describing the technology (PDF).

Funding for the project came from the US Air Force and the Office of Naval Research. The announcement came ahead of the paper being delivered at the Intelligent Robots and Systems 2012 conference in Portugal in mid-October.

The SLAM prototype pairs a Kinect with a laser range-finder to map a building in real time. Image: MIT

The sensor (pictured) works by scanning a building in a 270-degree arc with a LIDAR (Light Detection and Ranging) laser and combining this information with depth and visual data generated by a Kinect.

This information is sent to a processing unit — in the prototype, a laptop — in the user's backpack, which then builds the map.
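
As a rough illustration of this mapping step, the Python sketch below shows how range returns from a 270-degree laser scan could be rasterised into a simple 2D occupancy grid around the wearer's estimated pose. The grid size, resolution and update weight are illustrative assumptions, not details taken from the MIT prototype.

```python
import numpy as np

# Minimal sketch: rasterise one 270-degree laser scan into a 2D occupancy
# grid around the wearer's estimated pose. Grid size, resolution and the
# update weight are illustrative, not values from the MIT paper.

GRID_SIZE = 400          # 400 x 400 cells
RESOLUTION = 0.05        # 5 cm per cell
grid = np.zeros((GRID_SIZE, GRID_SIZE))   # accumulated occupancy evidence

def update_grid(pose, ranges, angles, hit_weight=1.0):
    """Mark the endpoint of each laser beam as occupied.

    pose   -- (x, y, heading) of the wearer in metres and radians
    ranges -- distance returned by each beam
    angles -- beam angles spanning the 270-degree arc
    """
    x, y, theta = pose
    for r, a in zip(ranges, angles):
        # Beam endpoint in world coordinates
        ex = x + r * np.cos(theta + a)
        ey = y + r * np.sin(theta + a)
        i = int(ex / RESOLUTION) + GRID_SIZE // 2
        j = int(ey / RESOLUTION) + GRID_SIZE // 2
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i, j] += hit_weight      # cell looks occupied

# One simulated scan: 541 beams from -135 to +135 degrees, walls 4 m away
angles = np.linspace(-3 * np.pi / 4, 3 * np.pi / 4, 541)
ranges = np.full_like(angles, 4.0)
update_grid((0.0, 0.0, 0.0), ranges, angles)
```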

Along with this, a MicroStrain inertial sensor is used to account for and compensate for the wearer's gait.

The technology can build maps of multiple floors within the same building. It distinguishes between floors using a barometric pressure sensor, whose readings are combined with those from the inertial sensor to work out when a person is using an elevator or climbing stairs.
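
To give a sense of how pressure readings can separate floors, here is a small, hypothetical Python sketch that converts barometric readings into an approximate floor number using the standard barometric formula. The assumed storey height and reference pressure are illustrative, not figures from the MIT paper.

```python
# Illustrative floor estimation from barometric pressure. The storey
# height and sea-level reference are assumptions, not MIT's values.

SEA_LEVEL_HPA = 1013.25
FLOOR_HEIGHT_M = 3.0      # assumed storey height

def pressure_to_altitude(hpa):
    """Simplified barometric formula giving altitude in metres."""
    return 44330.0 * (1.0 - (hpa / SEA_LEVEL_HPA) ** 0.1903)

def floor_index(hpa, ground_hpa):
    """Estimate which floor the wearer is on relative to the start."""
    climb = pressure_to_altitude(hpa) - pressure_to_altitude(ground_hpa)
    return round(climb / FLOOR_HEIGHT_M)

# Example: pressure drops as the wearer climbs the stairs
print(floor_index(1012.2, ground_hpa=1013.0))   # roughly two floors up
```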

Built for people, not robots

What sets the MIT system apart from other approaches to real-time indoor mapping is that it is designed for people, rather than robots. In addition, it does not rely on other information about the environment, such as the locations of nearby phones or Wi-Fi points, to construct its maps.

The inertial sensor is what allows the system to compensate for how a person walks; by contrast, robots move at a level, predictable rate.
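
A very simple way to picture that compensation, not taken from the MIT implementation, is to use the pitch angle reported by the inertial sensor to project a tilted laser return back onto the horizontal plane before it goes into the 2D map:

```python
import numpy as np

# Hypothetical gait compensation: flatten a range reading taken while
# the sensor is pitched forward or back as the wearer walks.

def level_range(measured_range, pitch_rad):
    """Project a tilted laser return onto the horizontal plane."""
    return measured_range * np.cos(pitch_rad)

# A 5 m return taken while the wearer leans 10 degrees forward
print(level_range(5.0, np.radians(10)))   # about 4.92 m
```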

The kit can also be miniaturised to make it more appropriate for emergency responders. "We envisage that the final device will be a hand-held unit, similar in size to a miner's lamp or... installed on the shoulder of the user," the researchers wrote.

This could make it easier for people in HAZMAT suits to use the device, they suggested.

"What they definitely tackled is the problem of height and dealing with staircases, as the human walks up and down," Wolfram Burgard, a professor of computer science at the University of Freiburg in Germany, said in an MIT press release. "The sensors are not always straight, because the body shakes. These are problems that they tackle in their approach".

Indoor independence

In addition, the technology does not depend on anything outside the wearer, so it can still work in buildings without power or RF devices.

At the moment, sophisticated indoor-mapping systems usually depend on other reference points in the local environment, such as Wi-Fi stations or mobile phones, to triangulate positions and build a map.

An example of such a system is SenseWhere, which uses GPS from a wearer's smartphone combined with the location of other RF access points to provide accurate triangulation.

The MIT system's maps can be built on the fly and can fix errors if a person retraces their steps. Image: MIT

Unlike SenseWhere, the MIT system builds maps (pictured) without other inputs from the interior environment.

However, this means that in particularly large or complicated buildings, maps can become disjointed as the software has trouble stitching different parts of the floor plan together. This can be dealt with by the wearer retracing their steps, according to the MIT researchers.

The Kinect's camera is used to tell whether the person has been in a location before and, if they have, to check whether the data gathered on their second walk-through differs from the first readings. If so, it will try to fix the map. Repeated passes through tricky areas can clear up faults, the researchers wrote.
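
As a loose sketch of that revisit check, the Python fragment below compares a crude descriptor of the current camera frame against descriptors stored for earlier poses and flags a match when they are close enough. The descriptor and threshold are hypothetical stand-ins for whatever place-recognition method the MIT system actually uses.

```python
import numpy as np

# Illustrative loop-closure check: recognise a previously visited spot
# by comparing a compact descriptor of the current camera frame against
# descriptors saved for earlier poses. Descriptor and threshold are
# illustrative assumptions.

visited = []   # list of (pose, descriptor) pairs

def frame_descriptor(image):
    """Crude global descriptor: a down-sampled, normalised grey image."""
    small = image[::16, ::16].astype(float).ravel()
    return small / (np.linalg.norm(small) + 1e-9)

def check_loop_closure(pose, image, threshold=0.05):
    desc = frame_descriptor(image)
    for old_pose, old_desc in visited:
        if np.linalg.norm(desc - old_desc) < threshold:
            return old_pose        # been here before: correct the drift
    visited.append((pose, desc))
    return None
```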

The MIT team has posted a video to YouTube of a walk through an internal space:
