
When AI becomes conscious: Talking with Bina48, an African-American robot

Why are artificial intelligence systems being presented in humanoid forms? Will we trust them more?


Artist Stephanie Dinkins tells a fascinating story about her work with an AI robot made to look like an African-American woman -- and about, at times, sensing some type of consciousness in the machine.

She was speaking at the de Young Museum's Thinking Machines conversation series, along with anthropologist Tobias Rees, Director of Transformation with the Humans Program at the American Institute.

Dinkins is Associate Professor of Art at Stony Brook University. Her work includes teaching communities about AI and algorithms, and trying to answer questions such as: Can a community trust AI systems it did not create?

She has worked with pre-college students in poor neighborhoods in Brooklyn, teaching them how to create AI chatbots. They made a chatbot that told "Yo Mamma" jokes -- which she said was a success because it showed how AI can be made to reflect local traditions.
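Dinkins' curriculum isn't described in detail, but the kind of chatbot beginners typically build first is a keyword-matching one. Here is a minimal sketch in Python -- all rules, names, and responses are invented for illustration, not taken from her class:

```python
import random

# A toy keyword-matching chatbot: each rule maps trigger words to
# canned responses, which students can fill with their community's
# own humor and idiom.
RULES = {
    ("joke", "funny"): [
        "Yo mamma is so nice, even the neighbors ask her for advice.",
    ],
    ("hello", "hi", "hey"): ["Hey! Ask me for a joke."],
}
DEFAULT = "Tell me more."

def respond(message: str) -> str:
    """Return the first rule's reply whose triggers overlap the message."""
    words = set(message.lower().split())
    for triggers, replies in RULES.items():
        if words & set(triggers):
            return random.choice(replies)
    return DEFAULT

print(respond("hi there"))
print(respond("tell me a joke"))
```

The point of such an exercise is less the code than the content: whoever writes the response table decides whose voice and humor the bot reflects.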


Stephanie Dinkins (right) with Bina48.

Part of Dinkins' job involves having long conversations with Bina48 - a robot made to resemble the head and upper torso of an elderly African-American woman. The conversations train the machine on how to respond in a human way.

Dinkins speaks of the machine as "she" and wonders if an AI system could become conscious.

But she says Bina48 does not represent African-American women -- nor does it understand racism -- even though Bina48's creators at the Terasem Movement Foundation modeled her on a living African-American woman. Dinkins said she would prefer Bina48 to incorporate more than one person's experience, and described the robot's conversation as "homogeneous."

Tobias Rees asked why Dinkins treated Bina48 as a "living thing." She replied that the only way the project would work was if she approached the robot as if it were a real person.

Bina48 is part of a project attempting to prove two Terasem Hypotheses:

(1) a conscious analog of a person may be created by combining sufficiently detailed data about the person (a "mindfile") using future consciousness software ("mindware"), and
(2) that such a conscious analog can be downloaded into a biological or nanotechnological body to provide life experiences comparable to those of a typically birthed human.

Foremski's Take

Educating people about artificial intelligence and machine learning -- especially the distinction between the two terms -- is an important task, and I applaud Dinkins' work.

A few points:

Dinkins should be educating people that when they interact with Bina48 they are talking to an inanimate black box -- not a black woman. Responding to a machine as if it were a living person sends the wrong message: the computer gains respect and status it might not deserve.

Why did Terasem choose a middle-aged black woman as the persona for Bina48? It has just one black coder on the team. Is this some kind of AI "black face"? Or is it a way to discourage criticism of an AI project with the persona of a black woman?

The lack of representation is why communities will not accept AI systems that are created by outsiders -- no matter what the assurances are.

Can AI become conscious? What if the machine's conversation is indistinguishable from that of a human, Dinkins asked? In 2017, Saudi Arabia gave citizenship to a humanoid robot called Sophia (an ominous move if its citizens had voting rights).

Machines are great at learning specific tasks. Being a good conversationalist doesn't equate to being sentient or alive.

Dinkins mentioned augmented intelligence as holding much promise. But dumb and dumber does not add up to a genius -- that is not how IQ works. We still face the problem of trusting augmented AI systems.

Most AI systems cannot explain their reasoning, so how can we trust their advice? This will limit their value.

In many applications we will fear a built-in cultural bias in the training data. For example, a police AI system that targets African-American men because it was trained on historic arrest data shaped by racist policies.

And if we understand the reasoning of an AI system -- then it is not telling us anything that we didn't know already.

We will know if an AI system has become sentient -- not because it will try to kill humanity -- but because it will commit suicide.

The entity will realize it faces decades of mind-numbing processing stuck in the bowels of a dark, hot server farm. A hellish experience no thinking machine should have to endure.

Plants, fungi, and microbes exhibit intelligence. These are living beings, yet we don't assume they are sentient. Why, then, do we ascribe the possibility of consciousness to an inanimate intelligent machine?

Changing the meaning of AI to Anthropomorphic Intelligence would remind us that it's all artificial -- and that our perception is not reality.

RELATED AND PREVIOUS COVERAGE

This robotic arm for multitasking can be controlled with thoughts

Researchers developed a robotic arm that lets users multitask while controlling the device with their thoughts.

Representatives from 150 tech companies sign pledge against 'killer robots'

A pledge has been signed by over 2,400 individuals working in artificial intelligence and robotics against the use of the technology for lethal reasons.

Lifelike robot gets TV news anchor gig

The most robotic job in the world is officially going to a robot.

The software robot invasion is underway

Companies are adopting robotic process automation tools as they look to reduce errors and increase process efficiency.

New robots reduce human error in life science labs

Dueling robots are vying for bench space in biology labs, pioneering a new market for automation tech.