
YouTube's algorithm is still recommending videos that you wish you hadn't seen, say researchers

Mozilla led a 10-month investigation into the workings of YouTube's recommendation algorithm, and concluded that it is time for the platform to provide full transparency.
Written by Daphne Leprince-Ringuet, Contributor

YouTube's algorithm is recommending videos that viewers later wish they hadn't seen, according to research carried out by Mozilla. At times, the report found, the algorithm even encourages users to watch videos that are subsequently found to have violated the platform's own content policies. 

Last year, Mozilla launched RegretsReporter, an open-source browser extension that lets users report videos they were recommended and wish they hadn't ended up watching. 

When filing a report, users are asked to provide the video's title, description, view count and entry point (whether they arrived via direct search or through recommended content); they can also provide Mozilla with a "trail" of how they reached the reported video. 
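To make those fields concrete, here is a minimal sketch of what a single report could look like as a data structure. The interface and field names are illustrative assumptions for this article, not RegretsReporter's actual schema.

```typescript
// Hypothetical shape of a single "regret" report, based on the fields
// Mozilla says volunteers submit. Names are illustrative only.
interface RegretReport {
  videoTitle: string;
  videoDescription: string;
  viewCount: number;
  // How the viewer reached the video: a direct search or an
  // algorithmic recommendation.
  entryPoint: "search" | "recommendation";
  // Optional chain of videos the viewer followed before landing on
  // the reported one.
  trail?: string[];
}

// Example of the kind of report a volunteer might file.
const example: RegretReport = {
  videoTitle: "Example video title",
  videoDescription: "Example description",
  viewCount: 1_200_000,
  entryPoint: "recommendation",
  trail: ["Wilderness survival basics", "Off-grid living tips"],
};
```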


In the ten months since RegretsReporter launched, 37,380 volunteers have downloaded the extension and together shared regrets about 3,362 videos. A team of 41 research assistants then went through the reported videos to try to establish links between harmful content and recommendation patterns. 

They found that an overwhelming 71% of 'regret' reports came from videos recommended by the algorithm. What's more, recommended videos were 40% more likely to be reported than videos that users had searched for. 

In almost half of the cases, the content the algorithm pushed appeared to be completely unrelated to what the viewer had previously watched. One volunteer, for example, reported being steered towards extreme right-wing channels after looking up wilderness survival videos. 

The algorithm also occasionally recommended videos that violate YouTube's own policies. Up to 200 videos reported through the extension have since been removed, but only after amassing a collective 160 million views. 

Back in 2019, Mozilla asked YouTube for details of how the platform's recommendation algorithm works, but was met with firm resistance. Despite repeated requests, Mozilla's researchers have still been unable to access the inner workings of the technology, which is what led to the RegretsReporter project.

"YouTube tells us the view count on a video, but not how many times they have recommended that video," Brandi Geurkink, Mozilla's senior manager of advocacy, tells ZDNet. "So it's very difficult to understand the role that the algorithm is playing in the bad experiences that people are having."  

"This is absolutely where we should be placing concern and emphasis – it's not only about how much harmful content there is on the platform, but also about taking responsibility for the role that their tools might be playing in amplifying it." 

Of course, RegretsReporter has its limitations. There is no way of preventing users from actively seeking out harmful videos to skew the data, for example, nor is it possible to claim that a pool of a few tens of thousands of viewers is representative of the platform's entire user base. 

"We try to make it really clear that tools like this are not a substitute for transparency from YouTube," says Geurkink. "What we do is expose some trends that we are seeing that we think are the tip of the iceberg, but being able to really understand how these experiences might be happening at the scale of YouTube would require transparency from YouTube, which is what we think needs to happen." 

If anything, argues Geurkink, the flaws in the methodology only highlight the depth of the problem: without appropriate transparency from the platform, researchers must resort to alternative, less reliable methods to try to shed light on the root cause of the issue. 


YouTube, for its part, has acknowledged that the platform's algorithm needs fixing. The company has promised to make amendments and recently launched over 30 different policy changes designed to reduce recommendations of borderline videos, which it claims have caused an average 70% drop in watch time for this type of content.

A YouTube spokesperson told ZDNet: "Over 80 billion pieces of information are used to help inform our systems, including survey responses from viewers on what they want to watch. We constantly work to improve the experience on YouTube and over the past year alone, we've launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%."

According to the platform, user surveys show that viewers are, in general, satisfied with YouTube's recommendations. 

For every 10,000 views on YouTube, the company found, an average of 16 to 18 (roughly 0.16% to 0.18%) come from content that violates community guidelines; that rate is down 70% compared to the same quarter of 2017, a drop YouTube attributes to larger investments in machine learning.

For Geurkink, that's not enough. "We've pushed them to actually allow for independent verification of those claims," she says. "I don't think that we should just take their claims at face value. We're pushing for auditing of the algorithm, for independent verification of those numbers, and they have not taken any steps to do that." 

But the tide might be turning. During a US Senate Committee hearing earlier this year, Senator Chris Coons quizzed YouTube at length over the nebulous workings of the platform's recommendation algorithm; previously, in the UK, MP Yvette Cooper also referred to YouTube specifically when raising the issue before Parliament.  

In other words, public pressure is mounting, and it might be that one day, platforms like YouTube are required to open their algorithms to external scrutiny. At least, that's what Geurkink and Mozilla are hoping for: the report, while calling once more on YouTube to share more information, also urges governments to enact laws that would mandate AI system transparency. 
