Research conducted by a professor at Monash University has highlighted the ethical dilemmas surrounding race and robots, calling for engineers to "absolutely" consider race when developing robots.
Speaking to ZDNet, Robert Sparrow, a professor in Monash University's Department of Philosophy, explained that as engineers look to develop humanoid robots, they have a moral obligation to think about the potential -- and often unintended -- ethical consequences of doing so.
"I suspect most engineers want to build robots that look futuristic, and our ideas of the future are set by science fiction, which often turns out to be quite racist," he said.
"There is this gleaming white aesthetic that represents the future and that means when you're building a robot, that's what you want your robot to look like, but then that has a consequence. If people are going to locate that robot on a spectrum of race, I think they're likely to respond to it being white.
"There are some cases where people do very explicitly engineer the race of their robots … where people, for instance, have built robots that are obviously Arabic speaking to mobilise national pride. There are other cases where people build robots in the images of particular individuals. Some of the Japanese robots, for instance, are very obviously modelled on Asian women or men.
"But I think most people aren't really thinking about race when building robots. It's just when those robots go out into the world, we might ask, 'what race is this robot?'"
In his research, Sparrow examined images of 125 humanoid robots from the Anthropomorphic Robot Database and found that between 66% and 72% were likely to be racially coded as white.
Sparrow's research papers, titled Do robots have race? and Robotics has a race problem, suggest, however, that the concept of race in robots could be avoided altogether if robots were not designed to replicate humans.
He pointed out that, for instance, if a washing machine were a robot, people would not associate it with a race, but if it were given human-like features, it would likely attract race-related attributions.
"A lot of the functionality of robots is much easier to achieve if you stop trying to model them on human beings," he said.
"This desire to build a humanoid machine sometimes makes engineering harder, but there are also these ethical dilemmas that arise."
At the same time, Sparrow noted that it's important to pay attention to race in some circumstances, such as when robots are used for policing or in medical environments.
"In the paper, I give the example of a policing robot. If you see a lot of big white robots in a black neighbourhood, people are more likely to see those as an extension of racist policing," he said.
"Similarly, for medical robots, if [for] some reason you want to build a humanoid robot for medical applications, one of the things we know about race is that people trust each other differently depending on race, so conversations go differently between people who are of the same race and different races. If you want your robot to build a rapport with patients, or patients to speak openly with a robot, you need to pay attention to its race."
He added that his research found that, unlike gender, race has not been as widely considered when robots are developed.
"People have thought about the voice of robots, they have thought a little bit about the sex, but there's been much less discussion about race," Sparrow said.
Looking ahead, Sparrow said he wants to explore the topic further.
"I'd like to work with human robotics interaction researchers to investigate when people do attribute race to robots, and what factors make a difference," he said.