Meet Simon, the social robot that can learn and adapt to humans

Around the world, researchers are working to make "social robots" a part of everyday human life. One such machine, Simon, can attend to busy environments and help clean up a workspace.
Written by Christina Hernandez Sherwood, Contributing Writer

Across the country -- and around the world -- researchers are working to make "social robots" a part of everyday human life. From helping the disabled to live independently to tutoring struggling students, these human-like machines could play a major role in our future.

At the Socially Intelligent Machines Lab at the Georgia Institute of Technology, assistant professor Andrea Thomaz is among the researchers designing interfaces to allow robots to learn from everyday people. Thomaz answered my questions last week about her latest project, Simon.

How do robots learn?

Generally speaking, at any point in time the robot perceives itself to be in a particular state, as detected by all of its sensors. Learning is the problem of building a model of what action to take in each state with respect to a particular task goal. Often we work in what is called a "supervised" learning setting, where the robot learns from examples or demonstrations of what to do given by a human partner.
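
To make that idea concrete, here is a minimal sketch of learning a policy from demonstrations; the state features, action names, and choice of classifier are illustrative assumptions, not the lab's actual system.

# Minimal sketch of supervised learning from demonstration (illustrative only,
# not Simon's actual code). Each demonstration pairs a sensed state with the
# action the human showed; a classifier generalizes that mapping to new states.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical state features: [gripper_open, object_in_view, object_distance_m]
demo_states  = [[1, 1, 0.2], [1, 1, 0.8], [0, 1, 0.2], [1, 0, 0.5]]
demo_actions = ["grasp",     "reach",     "lift",      "scan"]

policy = DecisionTreeClassifier().fit(demo_states, demo_actions)

# Given a newly sensed state, the learned policy proposes an action.
print(policy.predict([[1, 1, 0.3]]))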

Talk about Simon.

Simon is an upper-torso humanoid robot with two arms, two hands and a socially expressive head. We designed the robot specifically for research on human-robot interaction, and that led to several design choices. First, the torso and arms are roughly the proportions of an average-sized human. We want people to work alongside the robot without being intimidated.

Second, the [motors] are "series elastic actuators," which means they are designed to be compliant. This is a trade-off, because a very stiff robot is easier to control precisely, but the compliance of Simon's actuators makes interaction safer for humans. For example, in a recent demonstration we used this to achieve object hand-offs between a human and the robot: the robot knows when to grab an object by sensing something being pressed into its hand. Similarly, it knows to release an object it is handing over when it senses the person pulling on it.
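
The hand-off behavior described above can be pictured as a simple threshold rule on sensed forces; the thresholds, sensor values, and function below are hypothetical, sketched only to illustrate the idea.

# Hedged sketch of force-triggered hand-offs (hypothetical thresholds and
# sensors, not Simon's real controller): close the hand when something is
# pressed into the palm, open it when the partner pulls the held object away.
PRESS_THRESHOLD = 2.0  # newtons, made-up value
PULL_THRESHOLD = 1.5   # newtons, made-up value

def handoff_step(palm_force, pull_force, holding):
    """One control step: returns the new holding state and a hand command."""
    if not holding and palm_force > PRESS_THRESHOLD:
        return True, "close_hand"   # a human pressed an object into the hand
    if holding and pull_force > PULL_THRESHOLD:
        return False, "open_hand"   # the human is tugging the object away
    return holding, "hold"          # no change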

Third, we designed a socially expressive head that has both human-like and non-human-like elements. It has eyes, eyelids and a neck, which primarily let us give Simon human-like eye gaze behaviors. Eye gaze is an important social cue that facilitates many aspects of human interaction, and we are developing computational models to produce appropriate gaze behavior for Simon in various situations. Additionally, the robot has two articulated ears with RGB LED panels that allow them to light up in any color. The ears serve as a non-anthropomorphic communication channel for emotional expression and other nonverbal signals, [such as] interest, confusion, or surprise.

What can Simon do?

We recently did a demo at the CHI 2010 conference, and one of the things we worked on for it was Simon's social attention capabilities. When faced with a busy environment, we want the robot to look around in a socially appropriate way. This involves assessing the incoming perceptual stream and determining what is salient in the environment. Currently, Simon can perceive both the visual and the auditory environment and assign saliency values; everything is competing for the robot's attention. If a loud sound is perceived, the robot might glance in that direction and then look back to see whether people are trying to get its attention by waving objects.
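
A toy version of that saliency-based attention might look like the following; the cues, weights, and percept format are invented for illustration and are not Simon's actual model.

# Simplified sketch of saliency-driven attention (an illustration, not the
# real system): score each percept and glance toward the highest-scoring one.
def most_salient(percepts):
    """percepts: list of dicts with a 'location' and hypothetical cue values."""
    def saliency(p):
        return (2.0 * p.get("loudness", 0.0)   # sudden sounds draw attention
                + 1.5 * p.get("motion", 0.0)   # waving objects score highly
                + 1.0 * p.get("face", 0.0))    # faces are socially relevant
    return max(percepts, key=saliency)["location"]

percepts = [
    {"location": "left", "loudness": 0.9},              # a loud noise
    {"location": "front", "motion": 0.8, "face": 1.0},  # a person waving
]
print(most_salient(percepts))  # the stimulus the robot would glance at first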

The other skill we showed off was interactive task learning. Simon learns to clean up a workspace, building a model of what kind of objects to put where. The human partner hands Simon an object and indicates what should be done with it. The teacher can prompt Simon to ask questions by saying, "Do you have any questions?" Simon will then scan the workspace for any objects it is uncertain about. If such an object is found, it asks a query like, "What should we do with this one?" A model is learned from just a few examples. Then, the human can hand Simon new objects and they will be sorted into their proper locations.
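
The question-asking step resembles uncertainty-based active learning. A rough sketch follows, with made-up object features, bin names, and a standard classifier standing in for whatever model Simon actually uses.

# Rough sketch of learning where objects go and asking about uncertain ones
# (features, labels, and model choice are all assumptions for illustration).
from sklearn.linear_model import LogisticRegression

# Hypothetical object features: [redness, size]; labels are storage bins.
taught_objects = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.8]]
taught_bins = ["left_bin", "left_bin", "right_bin", "right_bin"]
model = LogisticRegression().fit(taught_objects, taught_bins)

# Scan the remaining workspace and ask about the least certain object.
workspace = [[0.85, 0.25], [0.5, 0.5], [0.15, 0.75]]
confidences = model.predict_proba(workspace).max(axis=1)
uncertain = workspace[confidences.argmin()]
print("What should we do with this one?", uncertain)

# Once the model is confident, new objects can simply be sorted:
print(model.predict([[0.95, 0.1]]))  # e.g., "left_bin"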

What's next for Simon?

We are continuing to work on Simon's interactive learning skills. Some things we are developing this summer have to do with nonverbal gestures for natural and intuitive turn-taking, which we hope to show will improve the learning interaction. We'll be taking Simon out of the lab again in July for the AAAI 2010 Robotics Exhibition, where we are participating in the Learning by Demonstration Challenge.

What challenges do you, and others, face as you try to bring social robots into our everyday lives?

We need to make advances in a variety of core technologies, like navigation, manipulation and perception. The challenge that my lab is concentrating on is learning, with a focus on learning with human input.

What's your goal?

I want to see robots successfully helping people in human environments and, in particular, I want those robots to be easy for people to adapt and use in whatever way they see fit. You shouldn't have to learn how to program your robot. It should be intuitive to teach it what you want it to do for you.

Image, top: Simon / Courtesy of Georgia Tech

Image, bottom: Andrea Thomaz / Courtesy of Georgia Tech

This post was originally published on Smartplanet.com
