Facebook tries to make it harder to find anti-vax groups

But it's just preventing pages that spread misinformation from showing up in its search function, rather than removing them.
Written by Asha Barbaschow, Contributor

Facebook has used its submission to the Australian Select Committee on Foreign Interference through Social Media to outline the steps it has taken to stop the spread of misinformation, or at least flag content that might be questionable.

As the submission [PDF] highlights, pre-pandemic, Facebook was faced with the dilemma of providing people with freedom of speech at the expense of allowing misinformation to spread. This was exemplified when false coronavirus "advice" spread like wildfire.

Must read: Facebook comments manifest into real world as neo-luddites torch 5G towers

"Since the very beginning of the crisis, we have been displaying on Facebook and Instagram prompts to direct users to official sources of information, including from the Australian government and the World Health Organization (WHO)," Facebook wrote.

"These have been seen by every Facebook and Instagram user in Australia multiple times, either in their feeds or when they search for coronavirus-related terms."

While it previously launched its own Coronavirus Information Centre and points users to the WHO or government health sites, Facebook has also started showing messages about COVID-19 misinformation on the News Feed to people who have liked, reacted, or commented on this type of harmful content.

"These messages will connect people to COVID-19 myths debunked by the WHO, including ones we've removed from our platform for leading to imminent physical harm," the social media giant wrote.

Facebook has also made "significant" donations of free advertising credits on its services to the Australian government and state governments.

It's also started rolling out a new notification to give people more context about COVID-19 related links when they are about to share them.

On the topic of vaccines, Facebook said it has been taking a range of steps to make anti-vaccination misinformation harder to find and elevate authoritative information about vaccines.

This includes removing groups and pages that spread vaccine misinformation from recommendations or predictions when a user types the words into the search bar; rejecting ads and fundraisers that include anti-vaccination misinformation; and inserting authoritative notices at the top of groups and pages that are discussing anti-vax misinformation, directing people to authoritative sources.

The social media giant isn't, however, removing the groups altogether.

See also: Facebook pulls video from Trump's page labelling it as COVID-19 misinformation

On providing more context around messages that are forwarded multiple times, Facebook said it has seen an increase in forwarding, which can contribute to the spread of misinformation.

In April, Facebook added new labels to indicate when a message on WhatsApp has been forwarded many times already. It also introduced a limit so a highly-forwarded message can only be sent to one chat at a time.

"This resulted in a 70% reduction in the number of highly forwarded messages on WhatsApp," Facebook said.

This month, it implemented similar message forwarding limits in Messenger.

Alongside Google, Facebook will also pilot a "magnifying glass" icon next to highly-forwarded messages on WhatsApp, letting users verify the truthfulness of the content.

As the submission was provided in an Australian context, the company touched on the work it undertook with the federal government's Digital Transformation Agency, Atlassian, and service provider Turn.io to bring the Australian coronavirus WhatsApp chat capability to life.

"Across the globe, chatbots such as the Australian government chatbot and the fact-checking chatbot on WhatsApp have sent hundreds of millions of messages directly to people with official information and advice," it said.

Facebook also partnered with the Poynter Institute's International Fact-Checking Network in May to launch a fact-checking chatbot on WhatsApp. Similarly, it joined forces with the WHO in March to launch a WhatsApp chatbot, expanding that as an alert service powered by Messenger.

Within days of the recent artificial intelligence upgrades, the WHO Health Alert service saw over 500,000 messages sent through Messenger and data on specific countries was requested more than 430,000 times. To date, the WHO Health Alert has received almost 4 million messages from over 540,000 users worldwide.