The pace of innovation in robotics in recent years has been stunning, with robots performing many tasks requiring some degree of human intelligence, from assembly to driving cars to flying aircraft. Robots are also interacting with humans on an increasingly sophisticated level.
In fact, the pace of robot innovation is far outpacing the development of legal and moral frameworks to govern machine interactions. There hasn't been much progress on this front since Isaac Asimov first published his "Three Laws of Robotics" in 1942:
- "A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- "A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."
As we rely on robots for more and more of the tasks of business and society, we need a legal framework to address the legal and moral questions that may come up: for example, what happens if a robot injures somebody, or if a questionable or ethically challenged decision is left to a machine? It's a wide-open frontier, legally.
To start the process of building such a framework, the University of Miami School of Law has announced plans for an inaugural conference on legal and policy issues relating to robotics. The event, dubbed "We Robot" (a play on Asimov's I, Robot), will be held in Coral Gables, Florida, in April 2012.
The aim of the conference is to host presentations on "reports from the front lines" of robot design and development, and "encourage conversations between the people designing, building, and deploying robots, and the people who design or influence the legal and social structures in which robots will operate."
Conference organizers seek to examine how the increasing sophistication of robots and their widespread deployment, everywhere from the home to hospitals, public spaces, and even the battlefield, disrupt existing legal regimes or require rethinking of various policy issues.
Of course, hopefully things won't go too far the other way, with the robotics and artificial intelligence industries becoming overrun with lawyers and mandates.
The call for papers is still open, but topics will likely include the following areas:
- Effect of robotics on the workplace, e.g. small businesses, hospitals, and other contexts where robots and humans work side-by-side.
- Regulatory and licensing issues raised by robots in the home, the office, public spaces (e.g. roads), and specialized environments such as hospitals.
- Design of legal rules that will strike the right balance between encouraging innovation and safety, particularly in the context of autonomous robots.
- Issues of legal or moral responsibility, e.g. relating to autonomous robots or robots capable of exhibiting emergent behavior.
- Issues relating to robotic prosthetics (e.g. access equity issues, liability for actions activated by conscious or unconscious mental commands).
- Relevant differences between virtual and physical robots.
- Relevant differences between nanobots and larger robots.
- Usage of robots in public safety and military contexts.
- Privacy issues relating to data collection by robots, either built for that purpose or incidental to other tasks.
- Intellectual property challenges relating to robotics as a nascent industry, to works or inventions created by robots, or otherwise peculiar to robotics.
- Issues arising from automation of professional tasks such as unauthorized practice of law or medicine.
This post was originally published on Smartplanet.com