Facebook has moved against accounts spreading QAnon-related conspiracy content in another crackdown on inauthentic behavior.
In April's "Coordinated Inauthentic Behavior" report, Facebook said that a total of five pages, 20 Facebook accounts, and six groups were taken down for being associated with the QAnon network, which the social media giant says is made up of individuals dedicated to spreading "fringe conspiracy theories."
QAnon sprang up in 2017 on 4chan, with a forum user dubbed "Q" posting a single conspiracy theory. The post claimed that US President Donald Trump was the world's only hope against the "deep state" -- a global network of elites responsible for evil.
In the three years since, the QAnon movement has expanded to include misinformation surrounding COVID-19, such as fake cures, claims that the virus is a bioweapon, and a general downplaying of the pandemic, as reported by The Conversation.
Reddit has previously banned QAnon subreddits. The accounts, pages, and groups removed by Facebook all originate from the United States.
"We found this activity as part of our internal investigations into suspected coordinated inauthentic behavior ahead of the 2020 election in the US," Facebook says.
In addition to the QAnon removals, 19 pages, 15 Facebook accounts, and one group originating from the same country were also taken down. An investigation by the company revealed connections to VDARE, an anti-immigration website founded by Peter Brimelow, as well as content from The Unz Review.
In response, Brimelow denied creating any fake accounts and called the decision "baffling." He also said that the money VDARE spent advertising on the network should be refunded. (ZDNet has reached out to Facebook for clarification.)
The COVID-19 pandemic has accelerated the moderation of spam, fraudulent, and fake content across social networks. Facebook is in a constant battle against those seeking to capitalize on fears surrounding the virus and has recently introduced a new system to let users know when they have interacted with fake coronavirus-related posts.
In March, Facebook also revealed a new tool designed to automate the takedown of fake accounts. The system is based on machine learning (ML) and has been taught to identify fraudulent profiles by analyzing account behavior.
According to the company, the tool has been used to remove 6.6 billion fake accounts in the past year alone and has also stopped millions of fake account creation attempts.