Researchers correct robot mistakes with their minds

The system allows users to correct AI mistakes using nothing more than their brains.
Written by Charlie Osborne, Contributing Writer
(Image: MIT CSAIL)

Researchers have developed a new system which allows human operators to correct robotic mistakes with only the power of their minds.

The world of artificial intelligence (AI) and machine learning (ML) has expanded rapidly in recent years. We now see AI in everything from Facebook's facial recognition system to voice assistants, and machine learning in cybersecurity and big data analysis. But because these systems are designed to emulate human decision-making and thought processes, mistakes can happen.

When an AI decision-maker chooses the wrong course of action, correcting these decision pathways can be an arduous process.

However, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University want to change that.

On Monday, the research team revealed a feedback system that allows mistakes to be corrected in real time using nothing more than brain power: an electroencephalography (EEG) monitor records the operator's brain activity as the robot works.

In a paper due to be presented at the IEEE International Conference on Robotics and Automation (ICRA) in May, the CSAIL team, supervised by CSAIL director Daniela Rus and BU professor Frank Guenther, explains how a humanoid robot called "Baxter" was able to tap into a researcher's brain waves to 'know' whether or not an action was correct.

When Baxter performed object-sorting tasks while being watched by a researcher, the robot would pick up on simple brain signals called "error-related potentials" (ErrPs) which are generated when we notice a mistake.

In similar experiments, it would often take the operator's full concentration to send out these signals by "thinking" in a prescribed way that robots are able to recognize; for example, a researcher might look at one of two light displays, each of which would correspond to a different task for a robot to execute.

However, Rus' team wanted to make the process more natural and less demanding on the operator, so it created machine learning algorithms that can classify brain waves within 10 to 30 milliseconds.

The team's system is able to use ErrPs alone to work out whether a human operator agrees with Baxter's decision-making.
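
To make that idea concrete, here is a minimal sketch of how an ErrP detector of this kind might be trained and queried. It is an illustration only, not the CSAIL team's actual pipeline: the window shapes, the flattened features, and the use of scikit-learn's LinearDiscriminantAnalysis (a common choice in EEG classification work) are assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data layout: n_trials EEG windows, each n_channels x n_samples,
# labelled 1 if the observer perceived the robot's action as a mistake.
def extract_features(windows):
    """Flatten each channels-by-time window into a feature vector.
    Real ErrP pipelines typically band-pass filter and downsample first."""
    n_trials = windows.shape[0]
    return windows.reshape(n_trials, -1)

def train_errp_classifier(train_windows, labels):
    """Fit a linear classifier to separate error from no-error windows."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(extract_features(train_windows), labels)
    return clf

def error_probability(clf, window):
    """Return the probability that the latest EEG window contains an ErrP,
    i.e. that the observer thinks the robot just chose wrongly."""
    features = extract_features(window[np.newaxis])
    return clf.predict_proba(features)[0, 1]

Because the classifier only has to make a binary call (error or no error) on each short window, this style of detector can run quickly enough to feed back into the robot while it is still completing the action.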

"As you watch the robot, all you have to do is mentally agree or disagree with what it is doing," says Rus. "You don't have to train yourself to think in a certain way -- the machine adapts to you, and not the other way around."

The signals are very faint, and if Baxter isn't entirely sure of an action, the robot triggers a query for more information. The system cannot yet fix these so-called "secondary errors" in real time, but the scientists remain hopeful that once it can, decision accuracy could reach 90 percent.
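
As an illustration of that querying behaviour, the sketch below shows one way a controller might act on an ambiguous ErrP probability. The thresholds and the robot interface are hypothetical stand-ins for illustration, not values or APIs from the published system.

# Illustrative only: thresholds and robot interface are assumptions.
ERROR_THRESHOLD = 0.7    # confident the observer saw a mistake
UNSURE_THRESHOLD = 0.4   # confident no error was perceived

class RobotStub:
    """Stand-in for a real robot controller."""
    def switch_to_other_choice(self):
        print("Switching to the other option.")
    def continue_current_action(self):
        print("Continuing with the current choice.")
    def request_clarification(self):
        print("Signal too faint -- asking the operator for more input.")

def react_to_feedback(p_error, robot):
    """Act on the classifier's probability that an ErrP occurred."""
    if p_error >= ERROR_THRESHOLD:
        robot.switch_to_other_choice()
    elif p_error <= UNSURE_THRESHOLD:
        robot.continue_current_action()
    else:
        robot.request_clarification()

react_to_feedback(0.55, RobotStub())  # falls in the ambiguous band, so the stub asks for help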

Baxter and the algorithms the robot relies upon are still in the early stages of development. However, CSAIL believes that in time the system could extend to multiple-choice tasks and may even, one day, advance to be of use to people who cannot communicate verbally.

"Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word," Rus says. "A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven't even invented yet."
