
Ex-Google engineer: Extreme content? No, it's algorithms that radicalize people

To stop radicalization, the real solution is to make online platforms' recommendation algorithms more transparent.
Written by Daphne Leprince-Ringuet, Contributor

In the age of fake news and radicalization, the real enemy is not content itself. It's the algorithm pushing that content up to the top of users' 'recommended' lists. 

That's according to software engineer Guillaume Chaslot. He should know: he used to work on the engineering team for ads at Google, which owns YouTube.

"We need to understand the difference between freedom of speech and freedom of reach. You're free to say whatever you want to say – but there shouldn't be freedom to amplify this," Chaslot told a conference ahead of the Mozilla Festival weekend in London.

He added that extreme content is not in itself problematic; in fact, he is in favor of platforms hosting as much content as possible.


Google's YouTube has come in for criticism for its poor management of potentially harmful content. As a result, it has recently made the removal of videos that violate its policy its number one priority.

At the same time, the platform has to perform a delicate balancing act between content moderation and freedom of speech. In a quarterly letter to YouTubers, CEO Susan Wojcicki wrote: "A commitment to openness is not easy. It sometimes means leaving up content that is outside the mainstream, controversial or even offensive."

The debate about content moderation is not new. But for Chaslot, the issue now is that companies not only publish content, but also apply recommendation algorithms to it.

"Algorithms are built to boost watch time, and that typically happens through viewing increasingly radical videos," he told ZDNet. 

"Someone could be completely radicalized through viewing hours of YouTube videos on end – and from the perspective of the algorithm, that's actually jackpot." 

When he worked at Google, he said, he raised this issue and suggested including more diverse videos in the platform's recommendation algorithm.

He was met with skepticism from management, and after leaving the company he started digging to find out exactly where the algorithm would lead him.

This coincided with the 2016 presidential election in the USA, and his research suggested that YouTube's algorithm was pushing users to watch more radical videos.

His results were published last year, with the disclaimer that they could only be partial, since the company withholds from the public any data about which content its algorithm promotes.

Chaslot also created AlgoTransparency, a website that attempts to simulate YouTube's algorithm to find out which videos are most likely to be promoted when it is fed certain common search terms.
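
That description suggests a simple crawl: start from common seed terms, repeatedly follow the top "watch next" recommendations, and count which videos keep reappearing. The sketch below captures that idea only; it is not AlgoTransparency's actual code, and fetch_recommendations() is a hypothetical stand-in for however the recommendation list would be obtained.

```python
from collections import Counter
from typing import List

def fetch_recommendations(video_id: str) -> List[str]:
    """Hypothetical stand-in: return the video IDs recommended next to `video_id`."""
    raise NotImplementedError  # scraping or an API call would go here

def simulate_promotion(seed_videos: List[str], hops: int = 3, top_n: int = 5) -> Counter:
    """Follow the top recommendations `hops` levels deep from each seed and
    tally how often each video appears; IDs that recur across many paths are
    the ones the algorithm appears to be promoting."""
    counts: Counter = Counter()
    frontier = list(seed_videos)
    for _ in range(hops):
        next_frontier: List[str] = []
        for vid in frontier:
            recs = fetch_recommendations(vid)[:top_n]
            counts.update(recs)
            next_frontier.extend(recs)
        frontier = next_frontier
    return counts
```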

"We don't know how much YouTube promotes radical ideas like terrorism," said Chaslot. "They are doing better but in the course of history, we have no idea, and we will probably never know."

A YouTube spokesperson told ZDNet that Chaslot misrepresented his role at Google; and that the company strongly disagrees with the methodology, data and, "most importantly", the conclusions of AlgoTransparency's research. 

"We've designed our systems to help ensure that content from more authoritative sources is surfaced prominently in search results and watch-next recommendations in certain contexts, including when a viewer is watching news related content on YouTube."

YouTube has ramped up efforts to change its recommendation algorithm. This year, it launched a trial in the UK to reduce the spread of what it calls "borderline content", after a similar trial in the US halved the views such content received from recommendations, according to the company.

The company says this reflects a shift from a system built to optimize watch time to one focused on how satisfied users are with the time they spend on the platform.
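
One way to picture that shift, again as a sketch rather than YouTube's actual system: instead of ranking on predicted watch time alone, blend in a separate satisfaction signal (for example, from user surveys) so that long but regretted sessions no longer win automatically. The weighting and names below are hypothetical.

```python
def blended_score(predicted_watch_minutes: float,
                  predicted_satisfaction: float,
                  satisfaction_weight: float = 0.7) -> float:
    """Hypothetical blended objective: normalize watch time roughly into [0, 1]
    and mix it with a satisfaction estimate (also in [0, 1]).

    With a high satisfaction_weight, a long session that users regret scores
    lower than a shorter one they report being happy with."""
    watch_component = min(predicted_watch_minutes / 30.0, 1.0)  # crude cap at 30 minutes
    return (1 - satisfaction_weight) * watch_component + satisfaction_weight * predicted_satisfaction

# A 28-minute video viewers regret vs. a 6-minute one they rate highly:
print(blended_score(28.0, 0.2))  # ~0.42: long but unsatisfying
print(blended_score(6.0, 0.9))   # ~0.69: shorter but satisfying scores higher
```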


But this is not enough, according to Chaslot, who argues that effective legislation is now needed to tackle the issue.

"It is similar to when we realized that tobacco was killing people," he said. "First, we needed the scientific evidence showing that tobacco is harmful – and now, we need the scientific evidence that YouTube is promoting extremism."

Only once this evidence is produced can there be growing public awareness of the issue, before legislation is introduced, he said. "We made rules to stop people from smoking in public places, not from smoking altogether," he pointed out. "Something similar should be done with content."

But under current laws, nothing forces online platforms to share the data that would enable such scientific research in the first place.

As a result, the world is governed by secret algorithms that decide 70% of what viewers see on YouTube and 100% of what they read on Facebook, he argued. With some understatement, Chaslot described this as "a bit crazy".
