In its earliest days, when it needed a definition because there were few exemplars to point to, science fiction was defined by the inclusion of hard science content. That is, you asked "what if": What if night only came once in a thousand years? What if an intelligent, indistinguishable-from-human android and a human joined together to solve crimes? But in telling the stories that such speculations inspired, you were not allowed to violate known science -- because otherwise it was fantasy, not science fiction.
Isaac Asimov, being the creative genius that he was, distilled these ideas into the short story Nightfall and the series of books starring R Daneel Olivaw and a human companion whose name, tellingly, I can't remember (it was Elijah Baley -- Ed).
In 2012, Ryan Calo and Michael Froomkin -- law professors at the Universities of Washington and Miami respectively -- sensed that robots were at approximately the stage the internet had reached circa 1988, and began to think about how to preemptively create good policy about them. Where, they asked, were the legal conflicts going to be? What new laws would be needed, which existing laws could be adapted, what metaphors would apply? When police question your robot butler, is that covered by the First, Fourth, or Fifth Amendment? Is it more like searching your filing cabinet or interrogating your spouse? If computer diagnostics become statistically more often correct than doctors, at what point do doctors relying on their own judgement become guilty of malpractice?
A stitch in time
At the root of such speculations is a single main question: how can we avoid spending 25 years rearguing the same points because an apparently small but wrong decision at the beginning creates havoc when the technology scales up to billions?
Robot Law, edited by Calo, Froomkin, and University of Ottawa professor Ian Kerr, is a collection of academic legal papers, largely drawn from the first four annual We Robot conferences, which they launched in 2012 to inspire scholarship to consider and suggest solutions for the likely areas of legal conflict.
Many of these papers are not all that science fictional: governing by algorithm and automated consent; the governance of robotic weapons; and how to allow for the effects of anthropomorphisation. While the papers vary in accessibility to laypeople, all are provocative. If nothing else, readers will come to understand how genuinely complex the programming and deployment of robots -- sometimes characterised at the conference as "iPhones with chainsaws" -- will be.
In their discussion of the Open Roboethics initiative, A Jung Moon et al. consider how to manage sharing an elevator between a building's human users and a large, autonomous delivery robot. Their proposal is carefully thought out, setting out how humans and the robot can negotiate who has precedence. Yet it's utterly divorced from real life, where humans will scramble into the elevator at the last moment, climbing on the robot if they have to. The trouble with robots, as these papers and the conference discussions show, is going to be people.