Steels's work deals with machine intelligence, but it embodies a fundamentally different view from that of the famous "Turing test". According to Turing, a human-like intelligence has successfully been created when a human can't tell the difference between a conversation with the artificial intelligence and one with a real person. "I think the Turing test is a bad idea because it's completely fake," Steels said. "It's like saying you want to make a flying machine, so you produce something that is indistinguishable from a bird. An aeroplane, on the other hand, achieves flight but doesn't need to flap its wings."

Similarly, Steels believes that machines can evolve intelligence through interaction with one another and with their ecology -- but this synthetic intelligence is unlikely to bear much superficial resemblance to human intelligence.

In one sense, Steels joked, the Turing test has already been passed -- by the Aibo. He demonstrated with a video clip in which an Aibo approached a dog eating a piece of meat and was treated just like another dog -- it was attacked.

He noted that while entertainment robots can interact with humans -- and particularly children -- through the use of emotional signals, they don't have their own interior lives. "They are like actors that express emotions but don't have the emotion themselves," he said.

However, Aibo-type machines can still be seen as the direct descendants of the wheeled "tortoises" developed by W. Grey Walter in the 1940s and 1950s. Steels built such robots using digital technology and Lego sets in the early 1990s, but in search of the next step he turned to the linguistic concept of "representations". For example, a street can be blocked off physically with a roadblock, but a "no entry" sign is a representation that carries the same weight. Representations are closely tied not only to social interaction but also to the functions of the brain.

Robotic resistance
This notion has met with resistance on both theoretical and practical levels. Some scientists, such as Rodney Brooks of MIT, have argued that intelligent behaviour doesn't need internal representations. And at this week's conference, others argued that the limitations of today's cameras and visual software make it impractical to carry out any real level of interaction with the world. Steels believes that technology is no constraint. "We don't need the full complexity of human vision; this can be built on any kind of sensory foundation," he said. As for the theoretical argument, he believes that sooner or later the field will have to stop modelling robots on an unrealistically limited view of humanity. "There is a danger in the field of viewing humans as machines, as automata, the way biology looks at humans as complex machines," he said. "Representation-making gives a rich view of people that is not covered by these behaviourist theories."