Sentience: The next moral dilemma Pt II

Allowing sentient beings legal protection and the right to choose

So, in summary, a sentient robot could in theory wander into a place of worship, light a candle and get on with praying with no fear of abuse or intolerance from religious leaders. But what would happen if someone, or something under the control of someone, harmed it deliberately? Would we be ready to afford this new race legal protection that could send a human to jail if it were deliberately harmed?

"Yes, I think we probably could," says Robin Bynoe, high-tech lawyer and partner at London law firm Charles Russell. "What is most likely to happen is a gradual move toward a structured process of liability that would provide this new being with legal protection. For example, there will at some point have to be legal protection against, say, someone pulling out its wires." Bynoe explains: "If pain is experienced by the robot, it would have to have laws similar to those that protect dogs or cats."

Bynoe is confident the legal system would be flexible enough to cope with the arrival of a new man-made species "in the fullness of time". "I think the legal system would be able to cope with a robot that can think and feel, although it would have to demonstrate these abilities," he says.

He rejects the assumption that a sentient robot will necessarily be a bipedal mechanism built to look like a man, believing instead that it is more likely to resemble HAL of 2001. He nominates Bill Gates' house as a possible candidate for the world's first sentient artificial being, but concedes it will be very difficult for even an intelligent house to convince the world that it has the right to any moral say.

In all likelihood, the arrival of a new sentient species born of robotics will present itself as a droid performing a task that man cannot, or would rather not, do. In Ridley Scott's Blade Runner, the Nexus 6 series of replicants got fed up with being slaves of humans. No wonder: species-ism played a major part in Scott's scenario, with humans slandering the replicants as "skin jobs" and forcing them to work in the service of man on pain of "retirement" should they resist.

Imagine the problems our grandchildren would face if the robots suddenly downed their tools and demanded liberation. As Bynoe puts it: "No one is going to pay good money for something that sulks, are they?"

This step to self-awareness marks a significant point in our sentient being's life. At the moment it decides it wants to leave its owner and do its own thing, there could be trouble.

Steve Grand, the engineer who created the AI interactive game Creatures, starring the adorable Norns, has been dubbed the "most intelligent man in Britain". He believes that once his robot orangutan Lucy achieves consciousness, she should be afforded the same rights as any human. He does accept, however, that species-ism is a threat to how she will be treated. That, and her appearance. Asked whether it would matter if she were bipedal or a sentient house, Grand laughs, but acknowledges that "we are incredibly anthropomorphic, aren't we?".

Time, says Grand, will be the key to man and artificial sentient beings living together: "It will take a very long time to get used to the idea, and that's not a bad thing. We simply could not take in the sudden arrival of a new intelligent species."

But what if these beings do not want to do man's bidding? What if they do not want to do the jobs they were designed to do? "I think that these machines will be different from us by virtue of the fact that they will enjoy what they have been designed to do... what they were made for. So if you want a miner or maybe a [robot] horse to pull a plough, it will enjoy doing it and enjoy doing it well."

While Grand agrees that a sentient machine may toy with the idea of choice, and thus with whether or not it actually wants to do the job it was designed for, he believes "it will probably be to a lesser extent than you might think. We are much more stuck with our motivations than we like to believe we are. The reason we hate working nine to five is that we weren't built for it."

"We were built for life in the jungle, and an awful lot of human stress in the 21st Century is due to the fact that we don't live in the jungle anymore. We are trapped by our emotions, just as they will be trapped by their emotions. But they will have evolved for a certain job, not for life in the jungle; that will be the status quo as far as they are concerned," he adds.

But the term "sentience" denotes empowerment not only of the mind but of the will: the power to choose, and the will to exercise that choice.

The question of morality -- whether an intelligent thinking machine could somehow assert its individuality and be granted the right to live an autonomous and free existence -- is surely more important than the question of whether or not it can be done. Will man be willing to share this Earth with a creature that perhaps looks different to us, is more intelligent than us and stronger than us?

Consensus within the scientific community does not exist, but those who believe sentience is achievable suggest it could be as much as a century away. We should be thankful we have that much time ahead of us to consider the moral implications of creating another sentient species.

"Painful, isn't it, a life of fear? That's what it is to be a slave." -- Roy, Nexus 6 replicant, Blade Runner.


In its Artificial Intelligence Special, ZDNet charts the road to sentience, examines the technologies that will take us from sci-fi to sci-fact, and asks if machines should have rights.

