
Robots and AI: Should we treat them like pets, or people?

Legal and artificial intelligence experts alike are trying to answer a simple question: who is responsible if a robot does harm?
Written by Danny Palmer, Senior Writer


Who is accountable for an artificial intelligence or robot that performs an action that harms people?

The harm might be accidental, but accountability is one of many questions society will need to answer as more advanced forms of AI -- driverless vehicles, likely to be the first robots we learn to trust, along with drones and even military weapons -- become more widely deployed.

AI and legal experts are attempting to figure it out, but there's no simple answer.

Speaking on a British Academy panel about robots and the law at The Royal Society, one expert suggested that the answer could be right under our noses. It's dog owners -- not the breeders who sell them -- who have legal responsibility for the actions of their pet, and the same could apply to robots.

"We have a very large dog; it weights 65 kilograms and is an Italian Mastiff, so it could be a dangerous dog. As the owners of that dog, we accept that we're responsible for it and that's an important principle in robotics; there needs to be a responsible person for each robot," said Professor Patrick Haggard, professor of cognitive neuroscience at University College London.

A responsible dog owner will carefully ensure that their dog can be traced or accounted for, and the same should apply to robots, he suggested.

"The dog is chipped and registered so it can be tracked and I think that's an important principle in terms of robotics, you need to know who these things belong to."

But given that people buy a robot vacuum cleaner precisely because they don't want to do the vacuuming themselves, is it fair to blame the owner when something goes wrong? Isn't it the fault of the maker?

"It doesn't make sense to make the differentiation," said Professor Susanne Beck, professor for criminal law and law philosophy at the University of Hanover, referring to the division of responsibility between an AI's user and creator. "Because the point of having these machines is that we don't want to be responsible for the mistakes it makes."

That means that the whole concept of accountability has to be dealt with in a "completely different way", she explained, which involves rethinking liability and responsibility because "we can't just say someone has made a mistake and ignore the consequences".

The answer could involve some form of electronic personhood, which Professor Beck described as a "pragmatic solution".

"We'd not see the machine as a person, but it's really a legal construct to deal with them, that's what it is. I'm not sure if that'll be enough for society, but that's a starting point," she said.

Nonetheless, others argue that a portion of the responsibility must still rest with the developer, because even if an AI isn't designed to do wrong, it can be trained the wrong way after being made public, as Microsoft's AI chatbot Tay infamously demonstrated when learning via public interaction made it racist.

Alternatively, an artificial intelligence that has only limited datasets to learn from can cause problems, such as a facial recognition system trained solely on images of people from one particular group.

"The programming could be for all intents perfect," said Roger Bickerstaff, partner at Bird & Bird, an international law firm with a focus on technology.

"But if the software is learning from the datasets it's exposed to, it may be inferring false conclusions from that or it may be being fed the wrong data in the first place," he said.

Nevertheless, it remains a difficult area to judge. "How would you separate the responsibilities of the programmer from the owner, given robots learn from the environment around them?" said Haggard.

In that case, taking guidance from how people are held accountable for their dogs may be a step in the right direction, at least for now. But discussions of AI and accountability will continue as machines become more intelligent and play a greater role in our lives.
