A resident fellow at Stanford Law School's Center for Internet and Society (CIS) is raising concerns about the legal questions facing the emerging field of personal robotics.
M. Ryan Calo (Credit: L.A. Cicero)
M. Ryan Calo, who is currently writing a paper on the subject, is looking at the possible legal ramifications robots and robotics manufacturers might face in the event of claims of personal injury and property damage. He says that the issues touch on criminal and civil rights laws as well.
"I worry that in the absence of some good, up-front thought about the question of liability, we'll have some high-profile cases that will turn the public against robots or chill innovation and make it less likely for engineers to go into the field and less likely for capital to flow in the area," said Calo.
A flood of lawsuits, he said, could cause the United States to fall behind other countries, such as Japan and South Korea, that are also at the forefront of personal robot technology.
Skeptical of the warning? Consider this: UN statistics project that millions of personal or service robots will enter homes this year and next, and ABI Research predicts that personal robotics will be a multibillion-dollar industry by 2015.
And robot-caused litigation is not uncommon. In 1999, a woman settled a lawsuit with her employer when a mail delivery robot allegedly pinned her against a wall, fracturing her toe and causing other injuries.
The applications for personal robots are multiplying (e.g., physical and psychological therapy, education, eldercare, exploration, hostage negotiation, rescue, entertainment, and home security) as people become more comfortable with them. Some predict that at some point, someone will sue for the right to marry their robot.
"Don't laugh," Paul Saffo, a technology forecaster and visiting scholar at Stanford's Media X project, said during a recent panel discussion held at the Law School to address the legal challenges surrounding robotics. "People get emotionally attached to their robots."
So the questions that may arise include: Who will be to blame if a robot-controlled weapon kills a civilian? Who can be sued if an autonomous vehicle takes an unexpected turn into a crowd of pedestrians? And who is liable if the robot you programmed to bathe your elderly mother drowns her in the tub?
In a post on his blog, Calo makes clear that his position is not that manufacturers should enjoy total immunity for the personal robots they build and sell. Instead, he is exploring "the best legal infrastructure to prioritize safety and compensate victims while preserving the conditions for innovation and investment." To do that, he tentatively proposes "to take a page from the thin book of Internet Law."
Under the Communications Decency Act, websites enjoy broad immunity for content posted by users and for good-faith filtering decisions. That immunity has permitted web services to proliferate and thrive. According to Calo, a "section 230 for personal robotics" could limit a manufacturer's legal risk where the owner programmed, instructed, or "taught" the robot to take the action at issue, or where the harm resulted from a valid safety mechanism.
"The system is imperfect—it's hard to tell who the publisher is for some content, for instance, and the availability of anonymity blocks redress in some instances—but it’s still no coincidence that Google, Facebook, MySpace, LinkedIn, and other web giants are all U.S. companies," Calo said.
He also recommends immunity for harm attributable to safety features, drawing an analogy to cars:
Cars are relatively well understood, with standardized components and interiors, so it may make sense to hold today's manufacturers accountable for "aggressive" airbags that cause needless injury. But cars developed as consumer products a hundred years ago, before robust product liability laws and industry standards existed. Personal robots may not survive similar treatment.