Robot Ethics: Book review

Summary: Now that robots have moved from science fiction to imminent reality, this book considers the ethical and other issues surrounding the increasingly smart machines we may soon be sharing our lives with.

If the first thing that pops into your head when you read the title Robot Ethics is science fiction writer Isaac Asimov's Three Laws of Robotics, then you're like many of the rest of us. The Laws were a storytelling device that Asimov adopted so he could write stories exploring the possible consequences of having non-evil robots share living space with humans — at a time when any real hope of such a thing was decades off, at least. We may now be on the verge of the real thing, from Roombas to caretaker robots looking after children and the elderly in Japan. And if there's one thing we know, it's that there isn't any realistic way of turning Asimov's laws into functioning computer code. In fact, a lot of the things we'd like robots to be able to do reliably — such as respond proportionately when they're deployed in a war zone — are simply not things we have any idea how to code.

Robot Ethics considers this sort of problem, as well as issues regarding robot lovers (Blay Whitby) and prostitutes (David Levy), humans' ability to fall into emotional dependence upon even the most machine-like of machines (Matthias Scheutz), robot caregivers and the ethical issues they pose (Jason Borenstein and Amanda Sharkey), whether there can be such a thing as a "moral machine" (Anthony F. Beavers), and the problem that comes up so often among optimistically futurist roboticists of whether at some point robots will deserve human rights (Colin Allen and Wendell Wallach).

I have to give these authors credit here: they are not just speculating about whether robots can become real people, but considering problems of liability. That's a good thing, because this is traditionally the point at which my inner biological supremacist asserts itself: who cares whether robots should have rights? Let's focus on the maltreatment so many humans have to live through first, OK? More practically, that sort of problem is a distraction from the very real opportunities that robots will present for invading their owners' privacy, as Ryan Calo argues in his chapter, 'Robots and privacy'; your "plastic pal who's fun to be with" is going to collect an amazing amount of data about you just in the ordinary course of organising your life — data that our increasingly surveillance-happy societies will surely be interested in.

"Probably the biggest moral conundrum posed by robots is the human propensity for anthropomorphising."

In the end, probably the biggest moral conundrum posed by robots is the human propensity for anthropomorphising: some (how many?) people treat their Roombas like family pets — and a robot could hardly be dumber than a Roomba. A smart robot designed to simulate real emotional response is infinitely more dangerous in terms of suckering us into cuddling up to it and telling it our innermost secrets. If some of the worst scenarios imagined in this book ever come to pass, it may be like the whale in The Hitchhiker's Guide to the Galaxy who, seeing the ground rushing toward it at high speed, asked optimistically, "I wonder if it will be friends with me?" That would be us, cast as the whale.


Robot Ethics: The Ethical and Social Implications of Robotics
Edited by Patrick Lin, Keith Abney and George A. Bekey
MIT Press
386 pages
ISBN: 978-0-262-01666-7
Price £31.95, $45

Talkback

  • Nice book, I'll have a look at that.

    The subject has fascinated me since I was a child and only dreaming of building robots.

    However, I'm a bit concerned about the discussion of the moral issues of robots in a war zone - particularly about threat response.

    That's getting far away from Asimov and into Terminator territory - I really hope that intelligent machines will never be asked to kill humans - and pitching them in battle against each other is a pointless waste of resources, if nothing else. (It might be entertaining though, which raises another moral question...)

    Machines and animals do not understand war; it's an entirely human concept, and I personally think that as we birth an entirely new race as a companion and workforce for us, we don't have the right to abuse it by forcing these concepts upon it in the first place.

    It also raises the question of why we still wage war ourselves, when we have the power to create a new race to subjugate and control - one that wouldn't complain and would, in fact, almost worship us in return for its continued existence.
    SiO2
  • Morals and emotions

    It's true that at the moment we don't - can't - have a moral machine, certainly not one with human morals.

    Part of the problem is the fact that morals aren't hard-coded and vary from group to group of humans - what is moral and indeed normal for one group can be repellent to another. Some cultures eat dog meat, a practice I find barbaric as a westerner, but then I do love a steak, and eastern culture forbids that.
    Location awareness and a behavioural database that includes these differences could solve that, but there's another problem.

    Morals also vary between individuals. An intelligent companion machine with a developed 'psyche' would be hard-coded not to steal, for example, but if its owner had no qualms about sticky fingers, then the machine is left with a problem - please its owner, or break its conditioning. It doesn't have the third option of refusing, because it will be designed not to.

    This is one area we need to look at carefully. Rights come with free will, or are bestowed. We can easily bestow on them the right not to be misused. The right, even, to be part of any decision that affects them greatly, just the same as us.

    But the right to say 'No' is one humans have trouble giving one another, let alone another species...

    Robot ethics don't yet exist; it's still more a question of human ethics, I think.
    SiO2
  • Ethics for robots, or for us?

    The mention of Asimov's Three Laws brings up a remark that Asimov had his character Susan Calvin make in the original "I, Robot" (the chapter called "Evidence"): that the Three Laws are also a good description of an EXTREMELY moral human being. We protect our own survival and try to avoid harming others; some of us risk our lives to obey laws and orders in order to protect others; and the very best and most courageous of us (in Christian theology, Jesus and the saints) deliberately allow ourselves to be killed to save others.

    But even Asimov had to make a few exceptions to the Three Laws. In "The Naked Sun" the visiting detective from Earth points out that robots with the First Law could be tricked into harming a human being: one robot could be ordered to prepare a poisonous solution for a non-human-involved experiment, then another robot could be told the solution is plain water, and ordered to serve it to a victim. In one chapter of "I, Robot" Susan Calvin has the task of separating the ONE robot without the "or allow a human being to come to harm" part of the First Law from a shipment of identical robots; it was specially developed to work with scientists who knew the risks of working with radiation, and needed to be left to their own judgement of their safety.

    And most shocking of all, in one of his last novels, "Robots and Empire", a world whose people consider themselves superior to all other humans, in fact a separate species, has built robots who obey the First and Second Laws ONLY for humans who speak with the accent of that world. They have been programmed to destroy human-appearing creatures that do not speak properly, because by definition they are not "human". That sounds like the way robots would be programmed by many people currently living on our planet!
    jallan32