
Teaching language to robots

For a project scheduled to end in 2011, Plymouth University researchers will build two robots whose software allows them to interact with each other and exchange learned information the way humans do. The team will use language-learning techniques designed for children. According to The Engineer, the goal of the project is to teach robots concepts, including the meaning of words. As the lead researcher said, 'Robots still don't know the meaning of things. The only techniques we have at the moment use mathematical tricks and statistics to produce more or less sensible replies.' These robots will be designed to encourage human interaction: each will have a long neck and a face in place of a gripper, so it can look around the room or inspect an object from all sides. Read more...
Written by Roland Piquepaille


This research project is led by Tony Belpaeme, a Lecturer in Intelligent Systems at the University of Plymouth, where he is a member of the Robotics and Intelligent Systems group in the Centre for Interactive Intelligent Systems.

Here is a description of these future robots provided by The Engineer. "There will be speakers, a microphone and two cameras in the robot's head, which Belpaeme said will be able to pick out humans in a room, make eye contact, track human gaze, and interpret pointing gestures and correlate them with what is being pointed at. 'We are going to make the robot look cute and we are going to try to trick people into teaching the robot things just as they would a small child,' said Belpaeme."

At first, such a robot will learn like a young child. "'It will have the intelligence of a three- to four-year-old in terms of competence of understanding things but we are aiming at it having interactions of a two- to three-year-old. If we can replicate that, I would be over the moon.' The researchers hope to explore, with help from developmental psychologists' knowledge about child language acquisition, how children learn words in the early years of life and then implement 'tricks' that they use onto the robots."

Now, let's turn to a project page maintained by Belpaeme, How can robots learn the meanings of words? Here is a more thorough description of the project. "In this project we will build two robots that will learn the meaning of words through interacting with people, much in the same way that young children learn conceptual knowledge from hearing adults speak to them about objects, relations and actions. It takes children almost three years to master a few hundred words and related concepts, as long as the duration of this project. However, we could speed up the process of word-concept learning by training more than one robot, thus reducing the training time needed, and then downloading the missing knowledge from one robot to the other. Such 'telepathic' access to concepts is impossible for humans: we need to resort to pointing out examples of concepts and speaking about them, but direct transfer should be easy to arrange for robots."

As this document explains, the project has two major goals. Let's focus on the first one, which "is to study how a robot needs to behave in order to elicit conceptual knowledge from people. Therefore we will build a robot face, containing cameras and microphones, on a long articulated neck. The neck allows the robot to look around the room, but also allows it to scrutinise objects laid out on a table in front of it. The robot will be able to seek eye contact, engage in joint attention and interpret gestures related to concept learning. It will engage in activities, such as asking its human teacher to confirm a word or playing a round of 'spot the X', to check its knowledge and, if necessary, adapt it. The second major aim of the project is designing computer algorithms that efficiently learn concepts from interactions involving real-world scenes and words. Children are particularly good at this, and the reason is that they use a number of constraints to help their learning. We want to program these constraints into our robot learning mechanisms."
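To get a feel for what such a learning constraint might look like in code, here is a minimal, purely illustrative Python sketch (not the project's actual software; all names are hypothetical). It implements one well-known constraint from child language acquisition, "mutual exclusivity": when a child hears a new word, they tend to assume it names an object that does not yet have a name.

```python
class WordLearner:
    """Toy word learner using the mutual-exclusivity constraint."""

    def __init__(self):
        self.lexicon = {}  # maps word -> guessed object identifier

    def observe(self, word, objects):
        """Hear `word` while seeing `objects`; return the guessed referent."""
        if word in self.lexicon:
            return self.lexicon[word]
        # Mutual exclusivity: prefer objects that no known word names yet.
        unnamed = [o for o in objects if o not in self.lexicon.values()]
        guess = unnamed[0] if unnamed else objects[0]
        self.lexicon[word] = guess
        return guess

learner = WordLearner()
learner.observe("ball", ["ball_obj"])  # only one candidate object
# A new word heard with a familiar and an unfamiliar object is mapped
# to the unfamiliar one:
print(learner.observe("cup", ["ball_obj", "cup_obj"]))  # prints "cup_obj"
```

A real system would of course work from camera features rather than object labels, but the principle of constraining the hypothesis space is the same.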

This project, which starts on September 1, 2008 and will end on August 31, 2011, is being funded by the UK's Engineering and Physical Sciences Research Council (EPSRC) for an amount of £192,291. Here is a link to the details of this grant, "Linguistic and direct transmission of concepts in robot-human networks."

Here is a link to additional information about this CONCEPT project, which will study how robots can acquire concepts using language and how conceptual information can be transferred between robots. If you're interested -- and qualified -- please note that the project has two PhD positions available for students, one on human-robot interaction and the other on cognitive robotics.

Finally, let's look at the second major goal of this project described on this page. "We want to study the fast direct exchange of knowledge between robots, and we believe that we can reuse the aforementioned algorithms to allow robots to teach each other new concepts and words. The robots will use the internet as a medium to interact and are no longer limited by the slow real world to do "show and tell" teaching. Learning thousands of concepts might, instead of the years it takes children, now take only a few minutes."
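Why could robot-to-robot transfer be so much faster than teaching a child? Once a robot's learned concepts are stored as plain data, "telepathic" transfer amounts to serialising that data and sending it over the network. The following Python sketch is an assumed design for illustration only (the function and data names are hypothetical, not from the project):

```python
import json

def export_concepts(lexicon):
    """Serialise a robot's learned word->concept table for transmission."""
    return json.dumps(lexicon)

def import_concepts(lexicon, payload):
    """Merge concepts received from another robot, keeping local entries."""
    incoming = json.loads(payload)
    for word, concept in incoming.items():
        lexicon.setdefault(word, concept)  # don't overwrite local knowledge
    return lexicon

# Robot A learned "ball"; robot B learned "cup".
robot_a = {"ball": {"shape": "round", "colour": "red"}}
robot_b = {"cup": {"shape": "cylinder", "colour": "blue"}}

# One download gives robot B both concepts, with no "show and tell".
robot_b = import_concepts(robot_b, export_concepts(robot_a))
print(sorted(robot_b))  # prints "['ball', 'cup']"
```

The hard research problem, of course, is not the transfer itself but ensuring that both robots ground the transferred symbols in the same perceptual experience.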

Sources: Anh Nguyen, The Engineer, UK, July 28, 2008; and various websites

