Facebook outlines its AI-driven efforts to fight terrorism

After facing criticism from EU leaders following the string of terrorist attacks in the UK, Facebook is stepping up its efforts to curb extremist content online.
Written by Stephanie Condon, Senior Writer

After facing criticism from EU leaders following a string of terrorist attacks in the UK, Facebook on Thursday outlined the ways it's stepping up its efforts to curb extremist content on its social network, including its use of artificial intelligence.

"We agree with those who say that social media should not be a place where terrorists have a voice," wrote Facebook's Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, in a blog post. "We want to be very clear how seriously we take this -- keeping our community safe on Facebook is critical to our mission."

Bickert and Fishman said Facebook removes "terrorists and posts that support terrorism" when the company spots them. That said, they added, "We don't want to suggest there is any easy technical fix."

Facebook is using its most "cutting-edge techniques" in AI to specifically combat terrorist content about ISIS, al Qaeda, and their affiliates, they said. Those techniques include image matching -- using previously removed terrorist propaganda to identify and prevent the same images or videos from being uploaded to Facebook again.
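At its simplest, image matching means fingerprinting removed media and checking new uploads against those fingerprints. The sketch below is illustrative only, using an exact SHA-256 digest over raw bytes; all function names are invented for this example, and real systems rely on perceptual hashes (such as PhotoDNA) that also catch visually similar, not just byte-identical, files.

```python
import hashlib

# Hypothetical blocklist of fingerprints of previously removed propaganda
# images. Illustrative only; a production system would use perceptual
# hashing so that re-encoded or slightly edited copies still match.
known_bad_hashes = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 hex digest of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_removed_image(image_bytes: bytes) -> None:
    """Record the fingerprint of an image removed as terrorist propaganda."""
    known_bad_hashes.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Block an upload whose fingerprint matches previously removed media."""
    return fingerprint(image_bytes) in known_bad_hashes
```

The key design point is that the check happens at upload time, so known content is stopped before it ever appears on the site.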

Facebook is also experimenting with language understanding, analyzing pro-terrorist text that's been removed from the site. "That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts," the blog post said.
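One minimal way to "learn from removed posts" is to score terms by how much more often they appear in removed text than in benign text, then score new posts by summing those term weights -- essentially a naive Bayes-style log-odds model. The sketch below is a toy assumption of that idea, not Facebook's actual algorithm; all names and the training data are invented.

```python
import math
from collections import Counter

def tokenize(text: str) -> list:
    """Crude whitespace tokenizer; real systems use far richer features."""
    return text.lower().split()

def train_term_scores(removed_posts, benign_posts, smoothing=1.0) -> dict:
    """Learn a log-odds weight per term: positive means the term is more
    common in removed (pro-terrorist) posts than in benign ones."""
    bad = Counter(t for p in removed_posts for t in tokenize(p))
    good = Counter(t for p in benign_posts for t in tokenize(p))
    vocab = set(bad) | set(good)
    n_bad = sum(bad.values()) + smoothing * len(vocab)
    n_good = sum(good.values()) + smoothing * len(vocab)
    return {
        t: math.log((bad[t] + smoothing) / n_bad)
           - math.log((good[t] + smoothing) / n_good)
        for t in vocab
    }

def score_post(post: str, term_scores: dict) -> float:
    """Higher scores suggest similarity to previously removed posts."""
    return sum(term_scores.get(t, 0.0) for t in tokenize(post))
```

A post scoring above some tuned threshold would then be queued for review rather than removed automatically, consistent with the blog post's caution that the algorithm is "in the early stages of learning."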

The company also uses algorithms to try to identify "clusters" of terrorists, by finding accounts that appear to be associated with or similar to disabled terrorist accounts. It's also updating its methods to detect new fake accounts from repeat offenders, and it's sharing data across its family of apps, which includes Facebook, WhatsApp, and Instagram.
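Finding accounts "associated with or similar to" disabled ones can be framed as a set-similarity problem over each account's connections. The sketch below flags active accounts whose connection sets overlap heavily (by Jaccard similarity) with any disabled account; the function names, threshold, and data layout are assumptions for illustration, and a real system would combine many more signals.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of connection ids."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_similar_accounts(accounts: dict, disabled: dict,
                          threshold: float = 0.5) -> list:
    """Flag active accounts whose connections overlap heavily with any
    disabled terrorist account. Both dicts map account id -> set of
    connection ids. Illustrative only."""
    flagged = []
    for acct, conns in accounts.items():
        for _, bad_conns in disabled.items():
            if jaccard(conns, bad_conns) >= threshold:
                flagged.append(acct)
                break  # one strong match is enough to flag
    return flagged
```

The same similarity idea extends naturally to detecting repeat offenders: a "new" account whose connections closely mirror a recently disabled account's is a candidate fake.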

In addition to using AI, Facebook noted how it's hiring specialists and working with government and industry partners. The company is expanding its community operations team to 3,000 people globally to monitor content. It has also hired more than 150 counterterrorism specialists, including academic experts on counterterrorism, former prosecutors, former law enforcement agents, analysts, and engineers.

Additionally, Facebook is working with other internet companies like Twitter and YouTube to combat extremism. It's also receiving briefings from government agencies across the globe and participating in public-private initiatives such as the Global Coalition Against Daesh. Facebook also supports "counterspeech" programs such as the Online Civil Courage Initiative.

Lastly, Facebook noted that it can't read encrypted messages. However, it said, "we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies."
