Uncanny valley: when robots are too life-like, humans dislike them

Are human-like robots the way of the future? New research demonstrates that the human brain detects the mismatch and categorizes androids as "creepy."
Written by Andrew Nusca, Contributor

If you've ever seen the film The Polar Express and felt that the characters, while obviously animated, were one step too realistic, you're not alone.

That vague feeling of creepiness has a scientific name: the "uncanny valley." Simply put, when an artificial agent becomes too human-like, real humans begin to like it less.

An international team of researchers led by Ayse Pinar Saygin of the University of California, San Diego is researching exactly why this occurs.

Scientists have known anecdotally for years that people respond positively to agents that share some characteristics with humans -- dolls, cartoon characters, Wall-E. As an agent takes on more human-like traits, it becomes more likeable.

But there's a point where that trend reverses: once the agent both looks and acts like a human, but is discernibly different, it's said to fall into the "uncanny valley" of discomfort among humans.

According to functional MRI images taken by the research team -- which included scientists from Japan, France, Britain and Denmark -- this phenomenon is due to a "perceptual mismatch" between appearance and motion.

The team studied what is called the "action perception system" in the human brain -- that is, how the brain interprets and identifies human appearance or human motion.

The researchers gathered 20 subjects aged 20 to 36 who had no experience working with robots and hadn't spent time in Japan (land of androids, i.e. human-like robots). They then showed the participants 12 videos of Repliee Q2 -- an android -- performing ordinary actions such as waving, nodding, taking a drink of water and picking up a piece of paper from a table.

The participants were also shown videos of the same actions performed by the human on whom the android was modeled, as well as videos of the same actions performed by a "stripped" version of the android that revealed the underlying mechanics, underscoring its artificial nature. They were informed of which were human and which were not.

Their findings, measuring brain response: during the android condition, the parietal cortex on both sides of the brain -- specifically the areas that connect the part of the visual cortex that processes bodily movements with the section of the motor cortex thought to contain "mirror neurons" -- showed evidence of a mismatch. The brain reacted strongly when humanoid appearance was paired with robotic motion.

"The brain doesn't seem tuned to care about either biological appearance or biological motion per se," Saygin said in a statement. "What it seems to be doing is looking for its expectations to be met -- for appearance and motion to be congruent."

Their findings help inform the development of artificial agents, which continue to proliferate as technology advances. At the intersection of biology and technology, it's clear that there are rules to abide by -- for one, that life-like robots are not the ideal. (Unless they achieve T-1000 levels of mimicry, in which case we've got bigger ethical fish to fry.)

The researchers themselves conclude as much:

As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners. Or perhaps, we will decide it is not a good idea to make them so closely in our image after all.

Their research was published in the Oxford University Press journal Social Cognitive and Affective Neuroscience.

This post was originally published on Smartplanet.com
