
2084: What happens when artificial intelligence meets Big Brother

A new book argues that we shouldn't be scared of the future of AI. AI today is what should worry us.
Written by Daphne Leprince-Ringuet, Contributor

A professor of mathematics at the University of Oxford who doubles as a philosopher of science and religion, John Lennox has some distinctive insights to offer when it comes to the future of artificial intelligence.

His new book, ambitiously titled 2084: Artificial Intelligence and the Future of Humanity, certainly suggests a post-Orwellian vision of dystopia, complete with an algorithmic Big Brother and an army of bio-engineered super-humans. Similar predictions have already been made by other influential academics: Yuval Noah Harari, in his bestselling book Homo Deus, for example, anticipates that technological developments will lead humans to enhance themselves in pursuit of goals such as immortality.

But far from portraying an Ex Machina-esque scenario, in which our AI creations take over the world and fundamentally change human nature, Lennox warns that the dangers of AI are more imminent.

"If creating an AI that surpasses humans were to happen, of course it would be a threat," Lennox tells ZDNet. "But there are major dangers long before then, and these dangers are actually happening now. I think it is misleading to tell people about the problems that will come in the future – it's what's happening now that demands an ethical and moral response."

Lennox separates the field of AI into two categories: general and narrow. He describes general artificial intelligence as the attempt to enhance human beings through add-ons, drugs and bio-engineering – and ultimately to liberate us from our biological bodies by uploading our minds onto everlasting silicon chips.

Narrow artificial intelligence is less glamorous and more real, according to Lennox. It is the technology that already exists in our smartphones, or the program that guides autonomous vehicles, and the code that optimizes Amazon's robotic warehouse pickers. "It's not intelligence at all," says Lennox. "It is simply a computer doing what it's programmed to do."

And yet, Lennox is far more afraid of narrow AI than he is of general AI. Or rather, he is afraid of whose hands narrow AI tools might fall into. In his book, he quotes C.S. Lewis' 1943 book The Abolition of Man: "What we call Man's power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument."

In other words, it is not AI itself that will cause trouble, and an apocalyptic take-over of the human species by intelligent robots is not on the cards just yet. An immediate reason to worry, however, has to do with how the technology can be used – and, in some cases, how it is already being used. "AI is not immoral," says Lennox, "it is amoral. It's what you do with it that can be either moral, immoral, or in some cases, neutral."

Lennox takes the often-cited example of facial-recognition technology, and specifically how the tool is used in China's Xinjiang region, home to some ten million Uighur people, who are predominantly Muslim. It was recently found that the Chinese government has been tapping facial recognition to carry out extensive surveillance of the population, including gate-like scanning systems that record biometric images as well as smartphone fingerprints, with the goal of keeping track of the Uighur community's movements.

Fingers can equally be pointed at the US and Europe, where the past few years have been punctuated by privacy scandals and examples of big tech leaking personal data to third parties without user consent – another form of surveillance that requires just as much scrutiny, says Lennox.

Of course, it is well known among experts and policy-makers that AI, even though it is in its early days, needs to be regulated. But Lennox is skeptical that the right context is in place to do so. Quoting Vladimir Putin's statement that whoever leads in AI "will become the ruler of the world", Lennox says that the power structures in place do not favor a responsible use of the technology.

"Certainly, the concentration of power centers that we see developing in the world at the moment indicate that there would be a risk if AI were to fall in the hands of a world government," says Lennox.

The race to dominate AI is certainly on. China has already announced that AI is a national priority, while Russia is ramping up research in the Era technopolis for the deployment of AI in the military sphere. The US, for its part, has completed the first year of its American AI Initiative, with the goal of consolidating the country's leadership in the field.

If a large power bloc were to make huge strides in AI, would it lead to an AI-infused fulfillment of George Orwell's prophecy, 100 years later? "My book's title is certainly designed to reflect Orwell's 1984, and it reflects its dystopic character," says Lennox.

But despite all the potential dangers that the technology could pose, the academic does not believe in calling the AI project off altogether. "As a scientist, that would be absurd," says Lennox, citing the immense benefits that algorithms have brought in fields like healthcare and manufacturing. "Should we abandon electricity because it can sometimes be dangerous? That attitude just doesn't work," he adds.

The way forward, according to the scientist, is to draw up international agreements that provide frameworks for the ethical use of AI. Some work has already been done, particularly by the European Commission, which recently released a white paper on artificial intelligence designed to make the technology a "force for good" that won't harm citizens.

Similarly, the US Department of Defense (DoD) published a 65-page set of ethical guidelines last year on the military use of AI, designed to keep killer robots in check. The Pentagon details the need to stick to principles such as responsibility and reliability when deploying the technology.

Those efforts are positive developments, Lennox acknowledges, but they remain tied to a single party's, or nation's, worldview. What happens when this worldview contradicts that of another major power bloc remains to be seen.

"There are different worldviews competing in the marketplace," says Lennox. "A lot depends on which worldview is having the most influence in a particular moment. Drawing lines is good, but in ethics you are always up against the question: Who said so, and why should I do that?"

If all countries had equal weight on the international scene, implementing ethical guidelines would be more realistic. But that is far from the case. And if one powerful bloc were to sideline the ethical use of AI, there is not much that could be done to hold it accountable.

Given the current state of things, it is hard to anticipate what will come next, says Lennox. The scientist recommends remaining wary of some of AI's potential uses, while also welcoming its positive impact in certain fields.

"But my predominant impression is that there are a lot of negative outcomes associated with AI, and that there is huge risk if we go too far," says Lennox. And with 2084 not that far off, we might find out soon enough. 
