
AIs are getting smarter, fast. That's creating tricky questions that we can't answer

AIs don't have human-level abilities yet, and they might never have them. But there are questions of responsibility, rights and moral status that we still need to talk about today.
Written by Jo Best, Contributor

Today, artificial intelligence (AI) covers a smart but limited set of software tools. But in the future, as artificial intelligence becomes more complex and ubiquitous, we could be forced to rethink the rights and wrongs of how we treat AIs – and even how they treat us. 

Currently, AIs are narrow in nature, performing tasks like image recognition, fraud detection, and customer service. But, as AIs develop, they will become increasingly autonomous. At some point, they're likely to do wrong. Who's really at fault when AIs make mistakes is a question that's set to trouble businesses and excite lawyers as they struggle to work out who could, and should, be held responsible for any resulting harm.

Today, in most cases of problems caused by AIs, it's obvious where fault lies. If you buy an AI and run it out of the box, and it does something terrible, it's probably the manufacturer's fault. If you build an AI and train it to do something terrible, it's probably yours. But it won't always be so clear-cut.

SEE: Ethics of AI: Benefits and risks of artificial intelligence 

The complications begin when these systems acquire memories and develop agency – when they start to do things that a manufacturer or a user never planned or wanted them to do.

"That's where we have this gap in responsibility. There will be interesting cases like that within the next 10 years that fall in that middle ground, where it's unclear who's to blame for real harm that was committed in the world," says Christopher Potts, faculty affiliate at the Stanford University's Stanford Institute for Human-Centered Artificial Intelligence.

One option to prevent an AI from developing in a way not sanctioned by the manufacturer is to build it so that any decision it makes can be explained to humans. The risk is that by restricting AI to what can be fully explained to people, we may miss out on a huge range of benefits. 

"We have a problem in that we don't understand what these models are going to do – no one has complete analytic control of them," says Potts. "Of course, people are working on explainability and things like that, to try to introspect and look inside the models and figure out why they behave the way they do or make certain decisions. But I predict that our ability to do that introspection will always be outpaced by our ability to create ever more powerful models that do more impressive things. So there'll always be this gap."

We may just have to accept that we won't always understand why AIs do what they do and live with that uncertainty – after all, we do the same for other humans.

Over time, AIs may become so sophisticated that they will be considered legally and morally responsible for their own actions, whether we understand them or not. In law, it's already possible for non-human entities to be held legally at fault for wrongdoing through what's called corporate personhood: where businesses have legal rights and responsibilities in the same way people do. Potentially the same could one day apply to AIs.  

That means, in future, if an AI could be found guilty of a crime, we might even have to consider whether it should be punished if it doesn't understand the rights and wrongs of its actions – an understanding that is often a threshold for criminal liability in humans. 

Questions of punishment also bring with them questions of rights, and whether AIs have any that could be infringed by the way they're penalised. But for some, even the idea of discussing rights for AIs at a time when human rights are still not universally upheld – and AIs' abilities are decades away, at best, from matching humans' – might seem like ethics running ahead of technology by some distance. 

SEE: What is neuromorphic computing? Everything you need to know about how it is changing the future of computing

For Peter van der Putten, assistant professor of AI at Leiden University and director of decisioning at Pegasystems, questions like these – which are often focused on imaginary scenarios and levels of AI ability that are more science fiction than reality – take away from the more present-day challenges of artificial intelligence. 

"I think questions of morality and ethics are important. But I would almost argue the case that it's indeed too early to think about giving AIs rights themselves, because when we talk about AI that's taking over the world, becoming totally autonomous at some point in the future, or when the singularity happens, we're kind of ignoring that the future is already happening today," he says. 

AI is already being applied at a very large scale, and its use should happen in a transparent, explainable, trusted, unbiased manner, which is not always the case, van der Putten says. Questions have been raised about the impact of biased AI on everything from healthcare to policing, and spotting and tackling those biases is already challenging, even with the relatively simple AI systems of today. 

"Before we even start the call to contemplate giving rights to AI in some distant future, we should solve the problem or the problems and also grab the opportunities that we have with AI today," van der Putten says.

An AI with human-level intelligence is a distant prospect, but that doesn't mean we can park moral questions until it arrives. Moral status doesn't necessarily depend on levels of intelligence. That means the rights of AIs might merit more consideration long before they reach the same IQ levels as us. 

"More moral status is gradable. It may be that AI won't get personhood until very far down the road. But AIs might acquire all sorts of status very soon. So, for example, they might become conscious very soon or they might start to be able to feel pain soon. And so, for each of those steps, we need to make sure that we don't mistreat the AIs depending on what moral status they have," says S. Matthew Liao, director of the Center for Bioethics and affiliated professor in the Department of Philosophy at New York University.

And in a distant future, when AIs gain abilities that outstrip our own, they may have interests and rights we hadn't even thought of. Academics Nick Bostrom and Eliezer Yudkowsky, for example, raised the question of whether an AI that experiences time differently to humans would have a right to control its subjective experience of time (which has implications for those that supply the hardware that AI runs on – if an overclocked machine would slow down its perception beyond the acceptable range, the AI might be entitled to different hardware).

SEE: The algorithms are watching us, but who is watching the algorithms?

As AIs grow in moral status, there's another moral concern we might need to deal with. If AIs have greater levels of intelligence and sentience than us, could that ultimately mean they deserve more moral status than us – if we had to choose between saving an AI and a person, would we have to choose the AI?

Yes and no. One argument against such equivalence is that we don't give greater moral status to smarter humans, or to people with greater levels of morality – we simply assume that every adult has the same moral status, and so should have the same rights and be treated equally. If there's a flood and you could only save one person, you'd probably weigh factors other than intelligence before choosing who to save. So even if AIs' intelligence outstrips ours, it doesn't necessarily follow that this would grant them greater moral status than us – there's an argument that, having all cleared the same bar of intelligence, we should all get the same consideration.

The counterargument is that as AIs develop new abilities far beyond those humans have – and potentially beyond the limits of human imagination – those abilities could become so advanced that they merit granting AIs greater moral status.  

A lot of philosophers actually worry about the question of whether AIs will eventually have greater moral status than humans, greater than even personhood, says Liao. "And the answer is although I think that it's possible that they could have greater moral status, it's not going to be because they have greater intelligence, greater emotions or greater morality, it's because they're going to have different attributes. I don't know what those attributes would be...  but they could be something that's quite special, and such that we should recognise them as moral agents that are really deserving of more protection," he says.
