Robots need culture says Sony scientist

A researcher says that the next wave of robots will be able to interact with one another, form their own languages and evolve new kinds of intelligence

Luc Steels, a professor at the University of Brussels and director of Sony's Computer Science Laboratories in Paris, wants to make robots more like living things by teaching them how to express themselves. It is a concept that has met with resistance from some quarters.

In Steels's view, the breakthrough that will take robots beyond the Aibo stage will come from allowing them to interact, form their own languages and even "cultures", rather than focusing strictly on how individual machines behave. This is in contrast to those who see the path to robotic intelligence simply as a matter of constructing increasingly complex machines.

If Steels's theories gain the upper hand, it would mean a new direction for the robotics field. Today, development tends to focus on machines that have increasingly complex behaviours and can learn new behaviours. Steels believes the next step is for robots to learn by forming concepts they can swap with other robots, thereby developing their own "minds", just as humans do.

"We're not going to get very far with robots if we don't focus on social interactions," Steels said. He presented a recent paper on the subject, "Evolving and Sharing Representations through Situated Language Games", on Thursday at the conference "Biologically Inspired Robots" in Bristol.

Steels's most ambitious projects to date have involved thousands of software agents (bits of code) transporting themselves across the Internet to control robots in different cities around the world. The agents could be "taught" which words to associate with the objects seen by the robots, and then use these words to interact with other agents. The spread of words and meanings among the agents follows patterns similar to those found in human culture.

Steels also works with Sony robots such as Aibo and an upcoming bipedal device, adding custom software to enable them to play word games.
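The word-game mechanism the article describes can be illustrated with a minimal "naming game" simulation. This is only a sketch of the general idea, not Sony's or Steels's actual software: the agent structure, the word-invention rule and the adopt-on-hearing rule are all simplifying assumptions.

```python
import random

# Illustrative naming-game sketch (assumed rules, not Steels's implementation):
# a speaker names an object, inventing a word if it has none, and the hearer
# adopts the speaker's word. Repeated interactions tend to spread shared words
# through the population, loosely mirroring the cultural dynamics described.

OBJECTS = ["ball", "box", "cone"]

def new_word():
    """Invent a random two-syllable word."""
    return "".join(random.choice(["ba", "lu", "ki", "so", "te"]) for _ in range(2))

class Agent:
    def __init__(self):
        self.lexicon = {}  # maps object -> word this agent uses for it

    def speak(self, obj):
        # Invent a word if this agent has none for the object.
        if obj not in self.lexicon:
            self.lexicon[obj] = new_word()
        return self.lexicon[obj]

    def hear(self, obj, word):
        # Simple alignment rule: adopt the speaker's word outright.
        self.lexicon[obj] = word

def play_round(agents):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    hearer.hear(obj, speaker.speak(obj))

random.seed(0)
agents = [Agent() for _ in range(10)]
for _ in range(2000):
    play_round(agents)

# Inspect how many distinct words survive per object; by drift the
# population tends towards one shared word for each.
for obj in OBJECTS:
    print(obj, {a.lexicon.get(obj) for a in agents})
```

Real versions of such games add perception (grounding words in what a robot's camera actually sees) and subtler alignment rules, but even this toy copying dynamic shows how a shared vocabulary can emerge without any central designer.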
Steels's work deals with machine intelligence, but it embodies a fundamentally different view from that of the famous "Turing test". According to Turing's criterion, a human-like intelligence has successfully been created when a human can't tell the difference between a conversation with the artificial intelligence and one with a real person.

"I think the Turing test is a bad idea because it's completely fake," Steels said. "It's like saying you want to make a flying machine, so you produce something that is indistinguishable from a bird. On the other hand, an aeroplane achieves flight but it doesn't need to flap its wings." Similarly, Steels believes that machines can evolve intelligence through interaction with one another and with their ecology -- but this synthetic intelligence is unlikely to bear much superficial resemblance to human intelligence.

In one sense, Steels joked, the Turing test has already been passed -- by the Aibo. He demonstrated with a video clip in which an Aibo approached a dog eating a piece of meat and was treated just like another dog -- it was attacked. He noted that while entertainment robots can interact with humans -- and particularly children -- through the use of emotional signals, they don't have their own interior lives. "They are like actors that express emotions but don't have the emotion themselves," he said.

However, Aibo-type machines can still be seen as the direct descendants of the wheeled "tortoises" developed by W. Grey Walter in the 1940s and 1950s. Steels built such robots using digital technology and Lego sets in the early 1990s, but in search of the next step turned to the linguistic concept of "representations". For example, a street can be blocked off physically with a roadblock, but a "no entry" sign is a representation that carries the same weight. Representations are closely tied not only to social interaction but to the functions of the brain.

Robotic resistance
This notion has met with resistance on both theoretical and practical levels. Some scientists, such as Rodney Brooks of MIT, have argued that intelligent behaviour doesn't need internal representations. And at this week's conference, others argued that the limitations of today's cameras and visual software make it impractical to carry out any real level of interaction with the world.

Steels believes that technology is no constraint. "We don't need the full complexity of human vision; this can be built on any kind of sensory foundation," he said.

As for the theoretical argument, he believes that sooner or later the field will have to stop modelling robots on an unrealistically limited view of humanity. "There is a danger in the field of viewing humans as machines, as automata, the way biology looks at humans as complex machines," he said. "Representation-making gives a rich view of people that is not covered by these behaviourist theories."


