
When robots see red

We use all of our senses to interact with our environment, but are robots designed in a similar way? An Indiana University neuroscientist and a University of Tokyo roboticist have worked together, using real and simulated robots, to show that useful information doesn't come only from the brain. Now, they think they can build better robots.
Written by Roland Piquepaille

We use all of our senses to interact with our environment. Our brain and our body work together, but are robots designed in a similar way? An Indiana University neuroscientist and a University of Tokyo roboticist have worked together, using real and simulated robots, to show that useful information doesn't come only from the brain. They measured the information flow from the environment to the robots, and then from the robots back to the environment, by recording what the robots saw and what they did. And now, they think they can build better robots by taking into account all kinds of sensory information. But read more...

These experiments have been devised by Olaf Sporns, associate professor at Indiana University, and Max Lungarella, a University of Tokyo roboticist. Here are some reactions from Olaf Sporns.

"Really, this study has opened my eyes," Sporns said. "I'm a neuroscientist, so much of my work is primarily concerned with how the brain works. But brain and body are never really separate, and clearly they have evolved together. The brain and the body should not be looked at as separate things when one talks about information processing, learning and cognition -- they form a unit. This holds a lot of meaning to me biologically."

In "Vision-body link tested in robot experiments," New Scientist also quotes Sporns, who spoke about their approach, known as "embodied cognition."

"We saw causation of both kinds," Sporns says. "Information flows from sensory events to motor events and also from motor events to sensory events." It is an important experimental demonstration of this aspect of embodied cognition, he claims: "This work and that of others is now making it more practical and less of a metaphor."

You should read the two articles linked above. This research has been published in PLoS Computational Biology, one of the journals of the Public Library of Science (PLoS), under the title "Mapping Information Flow in Sensorimotor Networks" (Volume 2, Issue 10, October 2006). Here are two links to the full text and to a printable version (PDF format, 12 pages, 4.35 MB) of the article.

The illustration below has been extracted from this paper (Credit: Max Lungarella and Olaf Sporns). Here is what the researchers wrote: "We find that information structure and information flow can be mapped between a variety of sensory and motor variables recorded from three morphologically different robotic platforms (a humanoid robot, a mobile quadruped, and a mobile wheeled robot), each of which reveals a specific aspect of information flow in embodied systems."

Robotics sensorimotor interactions

In the left column, you can see the three robots used by the researchers: Roboto has a total of 14 DOF (degrees of freedom), five of which are used in the current set of experiments (A1); Strider has a total of 14 DOF, with four legs of 3 DOF each and 2 DOF in the pan-tilt head system (A2); and Madame has 4 DOF, with 2 DOF in the pan-tilt system and 2 DOF for the wheels (A3). In the right column, you can see how these robots interact with their environment: Roboto engages in sensorimotor interactions via the head system and arm movements (B1); Strider engages in sensorimotor interactions via the head system, as well as via steering signals generated by the head and transmitted to the four legs (B2); and Madame's behavior consists of a series of approaches to colored objects (B3).

Let's return to New Scientist for more details about the experiments.

[The researchers] used a four-legged walking robot, a humanoid torso and a simulated wheeled robot. All three robots had a computer vision system trained to focus on red objects. The walking and wheeled robots automatically move towards red blocks in their proximity, while the humanoid bot grasps red objects, moving them closer to its eyes and tilting its head for a better view.
To measure the relationship between movement and vision the researchers recorded information from the robots' joints and field of vision. They then used a mathematical technique to see how much of a causal relationship existed between sensory input and motor activity.
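For readers curious how a vision system "trained to focus on red objects" can work in practice, here is a minimal sketch in Python using OpenCV. The HSV thresholds and the centroid-based targeting are illustrative assumptions on my part; neither the article nor the paper spells out the robots' actual vision pipeline.

```python
import cv2
import numpy as np

def find_red_target(frame_bgr):
    """Return the (x, y) centroid of the red pixels in a BGR camera frame,
    or None if no red pixels are found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in HSV, so combine two hue ranges.
    low_reds = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    high_reds = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.bitwise_or(low_reds, high_reds)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

if __name__ == "__main__":
    # Synthetic test frame: a red square on a black background.
    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    frame[40:80, 60:100] = (0, 0, 255)  # BGR red
    print(find_red_target(frame))       # approximately (79.5, 59.5)
```

A control loop would then turn the centroid's offset from the image center into a steering or head-pan command, which is exactly the kind of sensorimotor coupling the study sets out to measure.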
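The "mathematical technique" mentioned at the end of the excerpt is information-theoretic; the paper relies on measures such as transfer entropy, which asks how much better a signal's next value can be predicted when another signal's recent history is taken into account. Below is a minimal plug-in estimator of transfer entropy between two discretized time series, standing in for a sensor channel and a motor channel. The binning, the history length of one, and the toy signals are my assumptions, not the authors' exact procedure.

```python
import numpy as np

def discretize(x, n_bins=8):
    """Map a continuous signal onto integer bin labels 0..n_bins-1."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    return np.digitize(x, edges)

def transfer_entropy(source, target, n_bins=8):
    """Estimate T(source -> target) in bits, with history length 1,
    using plug-in probabilities from joint histograms."""
    s = discretize(np.asarray(source, dtype=float), n_bins)
    t = discretize(np.asarray(target, dtype=float), n_bins)
    y_next, y_now, x_now = t[1:], t[:-1], s[:-1]   # (y', y, x) triplets
    n = len(y_next)

    def joint_prob(*series):
        counts = {}
        for key in zip(*series):
            counts[key] = counts.get(key, 0) + 1
        return {k: v / n for k, v in counts.items()}

    p_yyx = joint_prob(y_next, y_now, x_now)   # p(y', y, x)
    p_yx = joint_prob(y_now, x_now)            # p(y, x)
    p_yy = joint_prob(y_next, y_now)           # p(y', y)
    p_y = joint_prob(y_now)                    # p(y)

    te = 0.0
    for (yn, yc, xc), p in p_yyx.items():
        # log [ p(y'|y,x) / p(y'|y) ] rewritten with joint probabilities
        te += p * np.log2((p * p_y[(yc,)]) / (p_yy[(yn, yc)] * p_yx[(yc, xc)]))
    return te

# Toy usage: a motor signal that lags a sensor signal should show a larger
# flow sensor -> motor than motor -> sensor.
rng = np.random.default_rng(0)
sensor = rng.normal(size=2000)
motor = np.roll(sensor, 1) + 0.1 * rng.normal(size=2000)
print(transfer_entropy(sensor, motor), transfer_entropy(motor, sensor))
```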

And what will this research be useful for? Simply to design better machines.

The experiments could suggest a better way to design and build robots, Sporns adds. Maximising information flow between sensory and motor systems could produce more flexible, capable systems, he says. Experiments involving more simulated robots, "evolved" using genetic algorithms, suggest this to be a promising approach, he says.
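The article doesn't spell out how those "evolved" simulated robots were set up. As a rough illustration of the general idea only, the sketch below runs a tiny genetic algorithm whose fitness rewards reliable sensor-to-motor coupling; the toy "robot", the correlation-based fitness (a stand-in for an information-flow measure), and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def information_flow(gain):
    """Hypothetical fitness: a stand-in for sensory->motor information flow.
    The toy 'robot' emits a squashed, noisy copy of its sensor signal as a
    motor command; the fitness rewards reliable sensor-motor coupling."""
    sensor = rng.normal(size=1000)
    motor = np.tanh(gain * sensor[:-1]) + 0.2 * rng.normal(size=999)
    return abs(np.corrcoef(sensor[:-1], motor)[0, 1])

def evolve(pop_size=20, generations=40, mutation=0.2):
    """Minimal genetic algorithm: keep the fittest half, mutate to refill."""
    population = rng.normal(size=pop_size)
    for _ in range(generations):
        scores = np.array([information_flow(g) for g in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]
        children = parents + mutation * rng.normal(size=parents.size)
        population = np.concatenate([parents, children])
    return population[np.argmax([information_flow(g) for g in population])]

print("evolved coupling gain:", evolve())
```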

For more technical details, you should read some of the papers published by the Neurorobotics group of the Computational Cognitive Neuroscience Laboratory (CCNL). And if you want to know more about these robots, you should also read this article published by Indiana University's Life Sciences, from which I borrowed the title of this post.

Finally, please note that the original Indiana University story quoted above in this post is currently unavailable. This is why I've put a link to a copy published by ScienceDaily.

Sources: Indiana University, via ScienceDaily, October 27, 2006; Tom Simonite, New Scientist, October 27, 2006; and various websites

