Researchers: Twitter bots account for only 4% of the anti-vaccine content users are exposed to

It's the humans who are spreading fake news, not the bots.
Written by Campbell Kwan, Contributor
Image: Kon Karampelas

Twitter bots play little to no role in shaping the vaccine discourse among Twitter users in the United States, according to a study published by researchers from the University of Sydney.

Less than 4% of the anti-vaccine misinformation that Twitter users are exposed to comes from bots, with the remainder coming from human-to-human interactions, the study found.

The study examined the Twitter activity of over 53,000 randomly selected users based in the United States and monitored their interactions with vaccine-related tweets from 2017 to 2019. These users were distributed across the United States with the most common user locations being California, New York, and Texas, which accounted for 12.3%, 9.2%, and 9.1% of the selected Twitter users, respectively. 

Combing through 20 million vaccine-related tweets, the researchers found that for most users, exposure to anti-vaccine content was relatively infrequent, and exposure to bots that posted such content was even more so. During the study's two-year period, a typical user was exposed to an average of 757 vaccine-related tweets, of which 27 included vaccine-critical content and none came from bots.

Meanwhile, the results indicated that 36.7% of users posted or retweeted vaccine content. By comparison, only 4.5% of users retweeted an anti-vaccine tweet, with only 2.1% of users retweeting such content from a bot.

The key difference between this study and past work, University of Sydney researcher Adam Dunn told ZDNet, is that it measures what people are looking at rather than simply counting what Twitter users are posting.

Other studies have focused only on the amount of content created by bots. One such study, performed by Carnegie Mellon University earlier this year, found that almost half of the 200 million tweets about the coronavirus posted from January to June came from bots.

Rather than counting the number of posts created by Twitter bots, the researchers of this new study measured how much exposure and engagement users had with vaccine-related tweets from bots. 

The researchers counted a vaccine-related tweet or retweet posted by an account that a user was following as a potential exposure, while engagement was measured by identifying vaccine-related tweets that were retweeted by users. 
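The counting rules described above can be sketched in a few lines of code. This is a hypothetical illustration, not the researchers' actual code; the `Tweet` structure, its fields, and the pre-labelled `from_bot` flag are all assumptions made for the example:

```python
# Hypothetical sketch of the study's counting rules (not the researchers' code).
# A tweet is a "potential exposure" for a user if it is vaccine-related and was
# posted or retweeted by an account the user follows; an "engagement" is a
# vaccine-related tweet the user themselves retweeted.

from dataclasses import dataclass

@dataclass
class Tweet:
    author: str            # account that posted or retweeted the tweet
    vaccine_related: bool  # whether the tweet concerns vaccines
    from_bot: bool         # assumed pre-labelled bot flag

def count_exposures(following: set[str], timeline: list[Tweet]) -> tuple[int, int]:
    """Return (potential exposures, how many of those came from bots)."""
    exposed = [t for t in timeline if t.vaccine_related and t.author in following]
    return len(exposed), sum(t.from_bot for t in exposed)

def count_engagements(retweets: list[Tweet]) -> tuple[int, int]:
    """Return (vaccine-related retweets, how many of those came from bots)."""
    engaged = [t for t in retweets if t.vaccine_related]
    return len(engaged), sum(t.from_bot for t in engaged)
```

Under this scheme, exposure is measured passively (what a user's followed accounts put in front of them), while engagement requires an action by the user, which is why the two figures in the study can differ so sharply.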

"That's a fundamental difference and it's a really important difference because I think it's potentially had an impact on the policies and the kind of rules that social media platforms have spent their time on," Dunn said. 

The study also found that even among Twitter users embedded in communities that engaged more heavily with the topic of vaccination (5.8% of the selected users), the vast majority never engaged with vaccine-related posts from bots.

Instead, these users engaged with vaccine-critical content posted by other humans in their communities. Fewer than 10% of this subgroup retweeted a bot even once during the study's two-year period.

Given the low impact that Twitter bots have on vaccine discourse in the United States, the researchers believe that allocating resources to eliminating bots may be less effective than providing tools to improve media literacy and developing personalised communications interventions targeted at communities where vaccine-critical content is most common.

On the topic of vaccines, Facebook has taken various steps to make anti-vaccination misinformation harder to find while elevating authoritative information about vaccines. The social media giant has not removed anti-vaccination groups altogether, however.

According to Dunn, these measures taken by Facebook are not effective.

"A lot of the things that social media platforms are doing are trying to get rid of bots or downrank things that are posted by the wrong people, but that's fundamentally the wrong approach. What they should be doing is helping to educate social media users so that they can be protected against misinformation and stop themselves from passing it on." 

"The people that are posting health misinformation online, they aren't trolls living under bridges trying to eat goats or hobbits; they're just people. 

"It's often not coordinated, sophisticated espionage from foreign governments -- it's just that it's convenient for us to try and find others to blame for why people aren't vaccinating their kids, why they aren't wearing masks, and why they're drinking bleach."

Meanwhile, since the middle of this year, Twitter has introduced labels to accompany misleading, disputed, or unverified tweets about vaccines and the coronavirus in a bid to crack down on the spread of harmful and false information about the global crisis.

The company also began to label tweets as misleading if they provided inaccurate information about voting and the electoral process. Twitter used this feature to put warning labels on several of Donald Trump's tweets throughout the summer and early autumn.

Similarly, Facebook has implemented measures to tackle the spread of COVID-19 misinformation by alerting users when they have interacted with fake or dangerous content. 

These measures are not extended to anti-vaccination misinformation, however.

The company has also been hesitant to fact check political advertising "because we believe it's important for the debate to play out".

"We don't think it's right that we should be the arbiters of truth," Facebook representatives told an Australian House of Representatives Committee last month.
