
Beware the Midas touch: How to stop AI ruining the world

If we have any hope of controlling AI systems, we need to clearly tell them what they should and shouldn't do, warns expert Nick Bostrom.
Written by Danny Palmer, Senior Writer

King Midas learned the hard way what happens if you don't specify exactly what you want. Image: Getty Images

The emergence of general artificial intelligence could be as significant for humanity as the agricultural or industrial revolutions. But humans need to take steps early on to make sure that these AIs are built in a way which makes them helpful rather than harmful.

According to Professor Nick Bostrom, a leading philosopher on artificial intelligence and founding director of Oxford University's Future of Humanity Institute, there is a way for humans to avoid becoming slaves to machines of superior intelligence: design those machines from the very beginning to ensure they will act in the interests of the human race.

This doesn't mean we need to "tie its hands behind its back and hold a big stick over it in the hope we can force it to our way" but rather that we must "build it in such a way that it's on our side and wants the same things as we do".

This, Bostrom said, should provide artificial intelligence developers with a scalable control model -- one that is effective at ensuring AI will do what we want and not, for instance, rise up in revolution against its human overlords. Effectively, AI should be designed from the outset to be benign.

Nonetheless, this still leaves the problem of determining what an AI should want and believe. That is tricky to define when humans can't even agree amongst themselves on vast swathes of philosophical and ethical questions.

"It's hard to specify in human language, like moral philosophers have tried, but with AI we have to take it a step further with machine-based coding that is somehow implementing some sort of control criteria and what happiness is, or justice, or beauty, pleasure, ethics and other human values," says Bostrom.

Even if you can code intelligent machines to objectively believe certain things and act in certain ways, the design needs to account for absolutely everything -- because if anything is left out, things can go badly wrong. "If that objective function omits to include some parameters, then actually you tend to find that it often gets set to extreme values," he said.
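
To make that concrete, here is a minimal sketch in Python -- a toy model with invented names ('paperclips', 'machines', 'energy'), not Bostrom's formalism -- showing how a brute-force search drives a parameter the objective never mentions to its extreme:

    import itertools

    def paperclips(machines, energy):
        # Toy production model: output grows with machines and energy.
        return machines * energy

    # The designer's objective counts only paperclips; energy use is omitted.
    def objective(machines, energy):
        return paperclips(machines, energy)

    # An exhaustive search over a small action space stands in for a powerful optimiser.
    actions = itertools.product(range(11), range(1001))
    best = max(actions, key=lambda a: objective(*a))

    print(best)  # (10, 1000): energy, never mentioned in the objective, is pushed to its maximum

Because energy never appears in the objective, the search has no reason to hold it anywhere other than wherever paperclip output is highest.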

For an example of what goes wrong when certain parameters are omitted or overlooked, Bostrom suggested heeding the myth of King Midas, who wished for everything he touched to turn to gold.

"He wanted everything he touches to be turned into gold. Then when he touches his daughter, she turns into gold, when he touches his food, it turns into gold. He forgot to put in the exceptions: everything I touch turns to gold unless it's food, a person and so on," he explained.

"Configuring is difficult, because if we're going to rely on putting in a powerful control function, we better make sure it does what we ask," he said.

It presents a similar problem to the 'paperclip maximiser' thought experiment, in which an AI is coded with the sole goal of producing paperclips. Given such a narrow goal and no rules about what it should not do, the AI could eventually destroy objects or harm people simply to keep building paperclips, with no regard for the damage it does.
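
Under the same toy assumptions as the sketch above, the usual textbook remedy is to fold the omitted concern back into the objective -- with the caveat, which is Bostrom's point, that anything still left unstated remains fair game:

    import itertools

    # Illustrative fix: make the previously omitted parameter part of the objective.
    ENERGY_CAP = 100  # an assumed limit, purely for illustration

    def constrained_objective(machines, energy):
        if energy > ENERGY_CAP:
            return float("-inf")  # actions over the cap are ruled out entirely
        return machines * energy  # same toy paperclip count as before

    actions = itertools.product(range(11), range(1001))
    best = max(actions, key=lambda a: constrained_objective(*a))

    print(best)  # (10, 100): the search now respects the stated limit,
                 # but any constraint still left unstated would be exploited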
