AI and disinformation join nukes in the race for Armageddon

Huawei's Ren Zhengfei downplays China's supposed lead in AI, but others talk up fears of a disinformation war -- including the Bulletin of the Atomic Scientists and their Doomsday Clock.
Written by Stilgherrian, Contributor

Some analysts say China is surging ahead with the development of artificial intelligence (AI), and perhaps even dominating the field. Huawei's founder Ren Zhengfei would beg to differ.

The US government has yet to figure out the implications of AI, but China hasn't even started thinking about it, Ren told the World Economic Forum in Davos last week.

China's problem, he said, is its education system.

"If you look at the education system in China, it's pretty much the same system designed for the industrial age, designed to develop engineers," Ren said through an interpreter.

"Therefore, I think AI cannot grow very rapidly in China. AI requires a lot of mathematicians, requires a lot of supercomputers, and requires a lot of a super-connectivity and super-storage in those areas."

Both China and the US need to invest more in basic education and basic research, he said.

Ren also played down fears of an AI-dominated future, explaining that companies like Huawei are developing so-called weak AI. He cited examples such as autonomous driving, uncrewed remote mining operations, and biomedicine.

"AI more importantly can be used for production, for efficiency gains, for wealth creation," he said.

"As long as there is more total wealth, governments have the means to have more balanced wealth distribution to balance out the social problems."

As for seeing AI as part of an arms race, Ren compared this to Cold War fears about nuclear weapons.

"If we will look at history from a distance, we see an enormous benefit from atomic energy, and radiation applications in medicine and others, that brought about enormous benefits to humanity," he said.

"Today, we're seeing fears about artificial intelligence, but we should not over-exaggerate. The explosion of atom bombs would hurt people, but people can manage that, and AI is not as damaging as atom bombs, right?"

But what about combining AI with lethal weapons, including nuclear weapons?

The threat of information warfare

Last week the Bulletin of the Atomic Scientists (BAS) moved its famous Doomsday Clock to just 100 seconds to midnight. That's the closest it has been since the clock was created in 1947.

As well as the risk of nuclear war, the BAS now cites the "insufficient response to an increasingly threatened climate" and the "increased threat of information warfare and other disruptive technologies" as threats to humanity's very existence.

"In recent years, national leaders have increasingly dismissed information with which they do not agree as fake news, promulgating their own untruths, exaggerations, and misrepresentations in response. Unfortunately, this trend accelerated in 2019," BAS wrote.

"Leaders claimed their lies to be truth, calling into question the integrity of, and creating public distrust in, national institutions that have historically provided societal stability and cohesion."

While nation-states have always used propaganda, BAS wrote that "the internet provides widespread, inexpensive access to worldwide audiences".

"The recent emergence of so-called 'deepfakes' -- audio and video recordings that are essentially undetectable as false -- threatens to further undermine the ability of citizens and decision makers to separate truth from fiction," they said.

"The resulting falsehoods hold the potential to create economic, social, and military chaos, increasing the possibility of misunderstandings or provocations that could lead to war."

Given AI's "known shortcomings", BAS said it's "crucial" that nuclear command and control systems continue to have human decision-makers.

BAS also wrote that biological engineering, hypersonic weapons, and space weapons "present further opportunities for disruption".

"The computerised and increasingly AI-assisted nature of militaries, the sophistication of their weapons, and the new, more aggressive military doctrines asserted by the most heavily armed countries could result in global catastrophe."

Data colonialism and using AI to hack people's minds

Back in Davos, Professor Yuval Noah Harari from the Hebrew University of Jerusalem warned of the potential dangers of lethal autonomous weapons (LAWs), and also what he called "hacking human beings".

"When you gather enough data on people, and you have enough computing power, [you] get to know people better than they know themselves," Harari said.

What happens when "Huawei or Facebook or the government or whoever" have enough data that they can "systematically hack millions of people"?

"They know more about me than I know about myself, about my medical condition, about my mental weaknesses, about my life history," Harari said.

"Once you reach that point, the implication is that they they can predict and manipulate my decisions better than me. Not perfect. It's impossible to predict anything perfectly. They just have to do it better than me."

Harari compared the emergence of commercial and nation-state data surveillance to the age of European imperialism of the 1800s and earlier, when there was no boundary between commercial imperialism and military or political imperialism.

"Just imagine the situation 20 years from now, when somebody, maybe in Beijing, maybe in Washington or San Francisco, knows the entire personal medical [or] sexual history of every politician, judge, and journalist in Brazil or in Egypt," he said.

"It's not weapons, it's not soldiers, it's not tanks... They know their mental weaknesses, they know something they did when they were in college when they were 20, they know all that. Is it still an independent country? Or is it a data colony?"

Education may not be the solution to the deepfake problem

National security academics have been concerned about deepfakes for some time.

In 2018, for example, Professor Bobby Chesney from the University of Texas School of Law and Professor Danielle Citron from Boston University Law School published a paper titled Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.

"The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases," they wrote.

"Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well."

As Citron explained on a recent edition of The Lawfare Podcast, a deepfake released at a "decisional choke point" could make a big difference, especially at a "very sensitive moment in time".

"If a deep fake is released, at just the right moment, whether it's two days before an election, the night before an initial public offering, the deep fake then can tip how people behave," she said.

"Those decisions in some sense are irreparable. You can't turn back an election. You can't turn back an IPO."

It's sometimes suggested that part of the solution would be educating people to be more critical consumers of media, but Chesney isn't sure that would help.

"We haven't yet figured out how to actually move people in that direction through any level of education," he said.

Another problem is that people would become "inherently skeptical" about what they're seeing or hearing.

"People who are liars, people who want to deny something that they really did say or do, will take advantage of that by saying not 'Fake news' but 'That's deepfake news'," Chesney said.

"I think we're already beginning to see indications of this. It's a real dilemma."
