A robot that follows a human commander and responds to gestures

Written by Chris Jablonski

This week marks another incremental step forward in the field of robotics, specifically around gestural control. Brown University reports that its researchers have demonstrated how a robot can follow nonverbal commands from a person in a variety of environments, indoors as well as outside, without adjusting for lighting. The modified iRobot PackBot can physically follow a person at a set distance without the person needing to wear special clothing, be in a special environment, or look back at the robot, according to Chad Jenkins, assistant professor of computer science at Brown University and the team's leader.

Jenkins and his team presented the work in a paper at the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2009) in San Diego this week. The video below shows the robot responding to gestures and verbal commands.

In the video, the robot responds to a variety of hand-arm signals instructing it to "follow," "halt," "wait" and "door breach." Both in an indoor hallway and in an outside parking lot, a Brown student walks with his or her back to the robot while it obediently follows like a dog at a maintained distance of about three feet, even when the student backs up or turns around and approaches it. (A sketch of how such a gesture-to-behavior mapping might be wired up follows below.)
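The paper's internals aren't spelled out in the release, but the top layer of such a system can be imagined as a simple dispatch from recognized gesture labels to robot behaviors. The following Python sketch is purely illustrative; the handler names and labels are hypothetical, not the Brown team's actual interface:

```python
# Hypothetical behavior handlers -- stand-ins for real drive commands.
def follow():
    print("tracking the commander at the set distance")

def halt():
    print("stopping all motion")

def wait():
    print("holding position until the next command")

def door_breach():
    print("executing door-breach maneuver")

# Map each recognized hand-arm signal to a behavior.
COMMANDS = {
    "follow": follow,
    "halt": halt,
    "wait": wait,
    "door breach": door_breach,
}

def dispatch(gesture_label):
    """Run the behavior for a recognized gesture; ignore unknown labels."""
    action = COMMANDS.get(gesture_label)
    if action:
        action()

dispatch("follow")  # -> tracking the commander at the set distance
```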

Credit: Nathan Koenig, Brown University

The team developed this capability through two key advances. The first involved visual recognition, which in robotics means helping a machine orient itself with respect to the objects in a room. The team wrote a computer program to recognize a human by extracting a silhouette, as if the person were a virtual cutout. This allowed the robot to "home in on the human and receive commands without being distracted by other objects in the space," according to a Brown University press release.
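The release doesn't describe the algorithm itself, but the general idea of silhouette extraction can be sketched with off-the-shelf tools. The Python snippet below uses OpenCV background subtraction, which is an assumption for illustration, not necessarily the Brown team's method: it pulls a foreground silhouette out of a camera frame and computes its centroid, the point the robot could "home in" on.

```python
import cv2

# Background subtractor that models the static scene and flags moving
# foreground pixels -- one common way to pull out a person's silhouette.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def largest_silhouette(frame):
    """Return the contour and centroid of the biggest foreground blob."""
    mask = subtractor.apply(frame)
    # Clean up speckle noise so small background flickers don't distract.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    # Assume the largest blob is the person being tracked.
    person = max(contours, key=cv2.contourArea)
    m = cv2.moments(person)
    if m["m00"] == 0:
        return None, None
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
    return person, centroid
```

Tracking a single extracted silhouette, rather than raw pixels, is what lets a robot stay fixed on its commander without being distracted by clutter in the scene.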

The second advance gave the robot a way to maintain a set distance from its human commander, using a special depth-imaging camera called a CSEM Swiss Ranger. The camera uses infrared light to detect objects and to establish the distance between itself and the target, as well as to any other objects in the area. According to Jenkins, these measurements enabled the Brown robot to stay locked in on the human commander.
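Keeping a fixed following distance from a live range reading is, at its simplest, a feedback-control problem. The sketch below is a minimal proportional controller under assumed parameters; the constants and function name are hypothetical, and the PackBot's actual drive interface is not described in the source.

```python
TARGET_DISTANCE_M = 0.91   # roughly the three-foot following distance
GAIN = 0.8                 # proportional gain; tuned empirically
DEADBAND_M = 0.05          # ignore tiny errors to avoid jitter

def follow_step(measured_distance_m):
    """One control tick: return a forward velocity command in m/s.

    measured_distance_m is the depth camera's range to the tracked
    silhouette; positive output drives the robot toward the person.
    """
    error = measured_distance_m - TARGET_DISTANCE_M
    if abs(error) < DEADBAND_M:
        return 0.0  # close enough -- hold position
    # Drive forward when the person pulls ahead, back up when they approach.
    velocity = GAIN * error
    return max(-0.5, min(0.5, velocity))  # clamp to a safe speed
```

Because the command's sign flips with the error, a controller like this also backs the robot away when the commander turns and walks toward it, matching the behavior shown in the video.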

"The result is a robot that doesn’t require remote control or constant vigilance, Jenkins said, which is a key step to developing autonomous devices. The team hopes to add more nonverbal and verbal commands for the robot and to increase the three-foot working distance between the commander and the robot."

The research, like many robotics projects, was military-funded, supported by the U.S. Defense Advanced Research Projects Agency Information Processing Techniques Office (DARPA IPTO) and by the U.S. Office of Naval Research.
