Facebook shares AI advancements improving content moderation

Improvements to systems like Facebook's Reinforcement Integrity Optimizer (RIO) are helping to drive down the amount of hate speech and other unwanted content that Facebook users see, the company said.
Written by Stephanie Condon, Senior Writer

Facebook on Wednesday shared some of the advancements in AI that are contributing to the company's colossal task of enforcing community standards across its platforms. New techniques and systems that Facebook has quickly moved from research into production, such as its Reinforcement Integrity Optimizer (RIO), are helping to drive down the amount of hate speech and other unwanted content that Facebook users see, the company said.

"AI is an incredibly fast-moving field, and many of the most important parts of our AI systems today are based on techniques like self-supervision, that seemed like a far off future just years ago," Facebook CTO Mike Schroepfer said to reporters Wednesday. 

In the Community Standards Enforcement Report published Wednesday, Facebook said that the prevalence of hate speech declined in Q2 2021 for the third quarter in a row, a drop it attributed to improvements in proactively detecting hate speech and to ranking changes in the Facebook News Feed.

In Q2, there were five views of hate speech for every 10,000 views of content, according to the report. That's down from five to six views per 10,000 views in Q1.
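For concreteness, that prevalence metric is just a ratio of views of violating content to total content views, scaled to a per-10,000 figure. The short sketch below illustrates the arithmetic; the counts are hypothetical, not Facebook's internal data.

    # Illustrative arithmetic for a prevalence figure like "5 per 10,000 views".
    # The counts below are hypothetical, not Facebook's internal data.

    def prevalence_per_10k(violating_views: int, total_views: int) -> float:
        """Views of violating content per 10,000 content views."""
        return violating_views / total_views * 10_000

    # e.g. 50,000 hate speech views out of 100,000,000 total views
    print(prevalence_per_10k(50_000, 100_000_000))  # -> 5.0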

Meanwhile, the company removed 31.5 million pieces of hate speech content from Facebook in Q2, up from 25.2 million in Q1, and 9.8 million from Instagram, up from 6.3 million in Q1.

Systems like RIO, introduced late last year, help the company proactively detect hate speech. 

The classic approach to training AI uses a fixed data set to train a model that's then deployed to make decisions about new pieces of content. RIO, by contrast, guides an AI model to learn directly from millions of current pieces of content. It constantly evaluates how well it's doing its job, and it learns and adapts to make Facebook's platforms safer over time.
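To illustrate the difference, consider a minimal sketch of online learning, in which a classifier is updated incrementally from a stream of current content rather than trained once on a fixed dataset. This is a generic illustration of the idea, not Facebook's RIO implementation; the stream, feedback, and featurization functions are hypothetical.

    # A minimal sketch of the contrast above: instead of one fixed training
    # run, the model keeps learning from a live stream of content plus a
    # feedback signal. Hypothetical names throughout; not Facebook's RIO.

    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")  # supports incremental updates

    # Classic approach: train once on a fixed dataset, then freeze.
    #   model.fit(X_fixed, y_fixed)

    # Online approach: keep adapting as new content and feedback arrive.
    def online_update_loop(content_stream, feedback_fn, featurize):
        for batch in content_stream:                  # current posts
            X = featurize(batch)                      # text -> features
            y = feedback_fn(batch)                    # e.g. reviewer labels
            model.partial_fit(X, y, classes=[0, 1])   # adapt in place
            # A production system would also track metrics like prevalence
            # and use them as a signal for what the model should learn next.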

"This kind of end-to-end learning is incredibly valuable for enforcing community standards," Schroepfer said. "Because the nature of the problem is always evolving alongside current events when new problems emerge, our systems need to be able to adapt quickly. Reinforcement Learning is a powerful approach to help AI meet new challenges when there's a shortage of good training data. We expect system our RIO to continue driving down the prevalence of hate speech and other unwanted content long into the future, which is very encouraging for such a new technology."

Facebook is also replacing single-purpose, bespoke systems with more generalized ones at a "surprising pace," Schroepfer said. He added that the company is seeing "impressive improvements" in multimodal AI models, which can operate across multiple languages; across modalities like text, images, and video; and across multiple policy areas.

Additionally, Schroepfer touted the progress Facebook researchers have made in the areas of "zero-shot" and "few-shot" learning, which enable AI systems to recognize violating content even if they've never seen it before or have seen only a few examples of it during training.

"Zero shot and few shot learning is one of the many cutting edge AI domains where we've been making major research investments, and we expect to see results in the coming year," Schroepfer said. 

Facebook's AI systems complement the work done by tens of thousands of individuals to enforce community standards.
