Garry Kasparov may have famously lost to IBM chess computer Deep Blue, but that doesn't mean he thinks the rise of artificial intelligence is necessarily a bad thing.
The 1997 encounter between Kasparov and the supercomputer was the first defeat of a reigning world champion by a computer under tournament conditions, and is seen as a significant milestone in the evolution of computing -- and by many as an ominous sign that artificial intelligence (AI) was rapidly overtaking human capabilities. Over the last two decades, the concerns about AI -- in particular, the abilities of computers to replace humans in the workplace -- have only grown.
While Kasparov tells me that game is 'water under the bridge' now, he also reminds me that he had beaten Deep Blue the year before.
"Analysing the games, you can see Deep Blue was not that impressive by modern standards. I was probably the stronger player," he says. "But who cares? It's about winning the game, it's about making fewer mistakes, and machines are much better at surviving under pressure," he adds, speaking to ZDNet at Web Summit in Lisbon.
However, the game did lead him to realise that there was no real future for human-versus-machine competition.
"Machines will get better and better because humans are prone to make mistakes; the machine has a steady hand," he says. Since his defeat, the gap between humans and computers has only grown wider, he notes, pointing out that for chess, the difference in capabilities between the top chess computers and reigning chess world champion Magnus Carlsen is now "the same as between Ferrari and Usain Bolt; there's no competition".
SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)
While Kasparov was one of the first to be defeated by a computer, many more are likely to join him in future. For all this, his outlook on AI is perhaps surprisingly upbeat.
"AI is a tool, it's a technology. It's not a harbinger of utopia or dystopia, it's not a magic wand, it's not the Terminator -- it's a tool," he says. "And at the end of the day, how you use a tool will determine our future."
Kasparov prefers the term 'augmented intelligence' because he sees it as a more precise way to describe human-machine collaboration, and also because 'artificial' sounds a bit too scary. He says he is an optimist by nature and sees the fear of AI as a psychological obstacle we need to overcome.
"If we are overwhelmed by these dystopian doomsayers, the future may be bleak, because I believe the future is a self-fulfilling prophecy. But if we have an optimistic outlook, if we believe in a bright future, there are so many ways that intelligent machines will make us smarter," he says.
"It is all about us finding the right combination; how we can actually apply our very human qualities -- emotions and other things that are uniquely human -- to this collaboration," he argues. There's a certain inevitability to the impact on jobs, too, he says. Technological progress has done away with perhaps 98 percent of the jobs in the agriculture sector, he notes, and it has wiped out tens of millions, if not hundreds of millions, of blue-collar manufacturing jobs. With the advent of AI, it's time for the white-collar jobs to go next.
"People say 'Oh, but it's cognitive'; so what? It's still a rote cognitive task, and we have been teaching the last two generations of the educated classes to act like machines, so that's why many of these jobs can easily be done by the machines," he says, pointing to data suggesting that only four percent of work requires human creativity, and 29 percent any emotional sense.
"Why do we do all this work that prevents us from showing our strengths, our humanity? I think there are many things that machines cannot do," he says.
While that may be true, few countries are prepared for the potentially huge impact on their society of a new wave of technology-enabled mass unemployment that could result, especially one that affects a new set of victims this time: the white-collar middle class.
Still, Kasparov argues that AI removing a swathe of jobs may simply spur innovation elsewhere: "AI will push us. AI will start creating this disruption that will force us to start to think about projects we abandoned because they were too risky or couldn't be managed by the current risk standards," he says.
"In the 1960s, talented kids wanted to be space engineers. Thirty years later, they all want to be financial engineers, so my hope is that today AI will kill enough jobs in the financial industry for kids to go back to space," he jokes.
Kasparov thinks there are limits to the rise of the machines. While they can ask questions, interrogate and process data, they don't know which questions are relevant.
"Basically, we will be dealing with a situation where machines will be dominant in any closed system, whereas for humans, our task will be actually to find the best way of dividing open-ended space into the closed systems where we can maximise the effective use of machines," he predicts.
"Now we have enough data to say that in any closed system, machines will prevail. The moment the target has been identified, then you can switch to the computer. How to identify the target is another story. To be more on the philosophical edge, I could say that we humans, we have purpose. The good thing is we don't know what purpose is, so we cannot share the secret with machines."
But doesn't that mean the list of things that humans are good at will continue to get smaller?
"It is shrinking," he concedes, but shrinking doesn't mean it is getting less important. "We will be controlling shrinking territory, but the way we can influence the outcome most likely will keep growing. It is up to us -- we are creating very powerful tools which could be used for good and also for bad; they could be used for destruction. Technology is agnostic; AI cannot create war, people can, so that's why we should consider the effect using AI will have."
SEE: Special report: How to automate the enterprise (free ebook)
On stage earlier at the conference, Kasparov blamed the "Hollywood brainwashing production" of Terminator, The Matrix and killer robots for our fear of AI. "I don't see any sign of machines threatening humanity yet, but doomsayers, they are running the show; it sells," he says. "I see no way of machines automatically transferring knowledge accumulated in a closed system to open-ended systems, which still remain a human domain."
Kasparov's point about AI only beating humans in a closed system may be right, but increasingly we are converting the world around us into just those closed systems: we're building smart homes and instrumenting our smart cities, carrying devices that monitor our health, all so that AI can understand these systems and make them efficient. The risk is that we create a digital version of the enclosure of the commons, where we are able to measure and optimise everything we do using AI, but lose control of it at the same time.
In some respects, worrying about AI destroying the world at some point in the future may also mean we are ignoring more pressing concerns around security. At Web Summit, Kasparov took part in a demonstration by security company Avast, for which he is a security ambassador, of how easy it is to break into an unprotected smart home.
"With every device you buy, you are infringing your privacy here and there, and if you have too many of them connected, the whole security could be in question," he warns, urging consumers to pay more attention to basic digital hygiene. Five minutes reading the fine print could save hours, maybe days, of trying to limit the damage, he says: "The moment you think you are getting it for free, it means you are paying with an invisible currency, which is your privacy."
His advice for staying safe in the connected world could easily be applied to wider society as it grapples with AI.
"My experience is that most of the threats can be easily avoided if people accept elementary rules that will protect them. Don't push the button without switching on your brains," he says.