
Facebook says AI enhancements have bolstered its content moderation efforts

With technology improvements, 22.5 million pieces of content were removed from Facebook for community standards violations in Q2.
Written by Natalie Gagliordi, Contributor

Facebook said it has made significant improvements to the technology it uses to detect and remove hate speech and other violations of its community standards, including misleading information related to COVID-19.

Facebook has historically relied on human moderators, employed by contract partners, as the basis for its content moderation strategy. However, these moderators are generally unable to work from home due to the graphic nature of their work. 

As a result, the social media company faced a potential content moderation backlog as the COVID-19 pandemic forced offices to remain closed or operate at reduced capacity.

Facebook's VP of integrity, Guy Rosen, said the company is now relying more heavily on technology to help manage the moderation workload. Facebook is using AI to build a ranking system that prioritizes the most critical content for human moderation teams to review. The AI evaluates how severe the threat in a piece of content might be -- for instance, a post that promotes child exploitation or signals that someone is in imminent danger of taking their own life -- and flags it for immediate review.

"If you step back and think of how technology helps with content moderation ... the AI helps to ensure that with a reduced moderator workforce, we can still focus on the most severe and critical categories that require review and action," Rosen said. 

With those technology improvements, Rosen said the proactive detection rate for hate speech on Facebook -- the share of removed content that Facebook's systems found before users reported it -- increased to 95%, resulting in 22.5 million pieces of content being removed for violations in Q2. On Instagram, proactive detection resulted in the removal of 3.3 million posts.

Facebook also made improvements to its automation capabilities to help detect and remove content violations in English, Spanish and Burmese. 

Meanwhile, from April through June, Facebook said it removed over 7 million pieces of harmful or misleading COVID-19 information from Facebook and Instagram. These include posts that push fake preventative measures or exaggerated cures that the CDC and other health experts have deemed dangerous. 

For other misinformation, Facebook said it's working with independent fact-checkers to display warning labels. From April through June, warning labels were placed on about 98 million pieces of COVID-19 misinformation on Facebook.
