Defending democracy in a post-truth world filled with AI, VR and deepfakes

Samuel Woolley's The Reality Game documents an online world awash with alternative facts, deepfakes and other digitally disseminated disinformation, and explores how to limit the damage in the future.


The Reality Game: How the next wave of technology will break the truth and what we can do about it • By Samuel Woolley • Endeavour • 242 pages • ISBN 978-1-91306-812-7 • £16.99

The 1986 Spycatcher trial, in which the UK government attempted to ban ex-MI5 officer Peter Wright's inconveniently revelatory book, was notable for the phrase "economical with the truth", which was uttered under cross-examination by Cabinet Secretary Robert Armstrong. Today, governments, political parties and other would-be opinion-formers regard veracity as an even more malleable concept: welcome to the post-truth world of alternative facts, deepfakes and other digitally disseminated disinformation.

This is the territory explored by Samuel Woolley, an assistant professor in the school of journalism at the University of Texas, in The Reality Game. Woolley uses the term 'computational propaganda' for his research field, and argues that "The next wave of technology will enable more potent ways of attacking reality than ever". He emphasises the point by quoting 70s Canadian rockers Bachman-Turner Overdrive: "You ain't seen nothing yet".

Woolley stresses that humans are still the key factor: a bot, a VR app, a convincing digital assistant -- whatever the tool may be -- can either control or liberate channels of communication, depending on "who is behind the digital wheel". Tools are not sentient, he points out (not yet, anyway), and there's always a person behind a Twitter bot or a VR game. Creators of social media websites may have intended to connect people and advance democracy, as well as make money: but it turns out "they could also be used to control people, to harass them, and to silence them".

By writing The Reality Game, Woolley wants to empower people: "The more we learn about computational propaganda and its elements, from false news to political trolling, the more we can do to stop it taking hold," he says. Shining a light on today's "propagandists, criminals and con artists" can undermine their capacity to deceive.

With that, Woolley takes a tour of the past, present and future of digital truth-breaking, tracing its roots from a 2010 Massachusetts Senate special election, through anti-democratic Twitter botnets during the 2010-11 Arab Spring, misinformation campaigns in Ukraine during the 2014 Euromaidan revolution, the Syrian Electronic Army, Russian interference in the 2016 US Presidential election and the 2016 Brexit campaign, to the upcoming 2020 US Presidential election. He also notes examples where online activity -- such as rumours about Myanmar's Muslim Rohingya community spread on Facebook, and WhatsApp disinformation campaigns in India -- has led directly to offline violence.

Early on in his research, Woolley realised the power of astroturfing -- "falsely generated political organizing, with corporate or other powerful sponsors, that is intended to look like real community-based (grassroots) activism". This is a symptom of the failure of tech companies to take responsibility for the issues that arise "at the intersection of the technologies they produce and the societies they inhabit". For although the likes of Facebook and Twitter don't generate the news, "their algorithms and employees certainly limit and control the kinds of news that over two billion people see and consume daily".

Smoke and mirrors

In the chapter entitled 'From Critical Thinking to Conspiracy Theory', Woolley argues that we must demand access to high-quality news "and figure out a way to get rid of all the junk content and noise". No surprise that Cambridge Analytica gets a mention here, for making the public aware of 'fake news' and using "the language of data science and the smoke and mirrors of social media algorithms to disinform the global public". More pithily, he contends that "They [groups like Cambridge Analytica] have used 'data', broadly speaking, to give bullshit the illusion of credibility".

Who is to blame for the parlous situation we find ourselves in? Woolley points the finger in several directions: multibillion-dollar corporations who built "products without brakes"; feckless governments who "ignored the rise of digital deception"; special interest groups who "built and launched online disinformation campaigns for profit"; and technology investors who "gave money to young entrepreneurs without considering what these start-ups were trying to build or whether it could be used to break the truth".

The middle part of the book explores how three emerging technologies -- artificial intelligence, fake video and extended reality -- may influence computational propaganda.

AI is a double-edged sword, as it can theoretically be used both to detect and filter out disinformation, and to distribute it convincingly. The latter is a looming problem, Woolley argues: "How long will it be before political bots are actually the 'intelligent' actors that some thought swayed the 2016 US election rather than the blunt instruments of control that were actually used?" If AI is to be used to 'fight fire with fire', then it looks as though we're in for a technological arms race. But again, Woolley stresses his people-centred focus: "Propaganda is a human invention, and it's as old as society. This is why I've always focused my work on the people who make and build the technology."

Deepfake video -- an AI-driven image manipulation technique first seen in the porn industry -- is a fast-developing issue, although Woolley gives several examples where undoctored video can be edited to give a misleading impression (a practice seen during the recent 2019 general election in the UK). Video is particularly dangerous in the hands of fakers and unscrupulous editors because the brain processes images much faster than text, although the widely-quoted (including by Woolley) 60,000-times-faster figure has been questioned. To detect deepfakes, researchers are examining 'tells' such as subjects' blinking rates (which are unnaturally low in faked video) and other hallmarks of skulduggery. Blockchain may also have a role to play, Woolley reports, by logging original clips and revealing if they have subsequently been tampered with.

As a relatively new technology, extended reality or XR (an umbrella term covering virtual, augmented and mixed reality) currently offers more examples of positive and democratic uses than negative and manipulative ones, Woolley says. But the flip-side -- as explored in the dystopian TV series Black Mirror, for example -- will inevitably emerge. And XR, because of the degree of immersion, could be the most persuasive medium of all. Copyright and free speech laws currently offer little guidance on cases like a virtual celebrity "attending a racist march or making hateful remarks", says Woolley, who concludes that, for now, "Humans, most likely assisted by smart automation, will have to play a moderating role in stemming the flow of problematic or false content on VR".

A daunting task

The upshot of all these developments is that "The age of real-looking, -sounding, and -seeming AI tools is approaching...and it will challenge the foundations of trust and the truth". This is the theme of Woolley's penultimate chapter, entitled 'Building Technology in the Human Image'. The danger is, of course, that "The more human a piece of software or hardware is, the more potential it has to mimic, persuade and influence" -- especially if such systems are "not transparently presented as being automated".

The final chapter looks for solutions to the problems posed by online disinformation and political manipulation -- something Woolley admits is a daunting task, given the size of the digital information landscape and the growth rate of the internet. Short-term tool- or technology-based solutions may work for a while, but are "oriented towards curing dysfunction rather than preventing it," Woolley says. In the medium and long term "we need better active defense measures as well as systematic (and transparent) overhauls of social media platforms rather than piecemeal tweaks". The longest-term solutions to the problems of computational propaganda, Woolley suggests, are analog and offline: "We have to invest in society and work to repair damage between groups".

The Reality Game is a detailed yet accessible examination of digital propaganda, with copious historical examples interspersed with imagined future scenarios. It would be easy to be gloomy about the prospects for democracy, but Woolley remains cautiously optimistic. "The truth is not broken yet," he says. "But the next wave of technology will break the truth if we do not act."
