
Facebook steps up work in AI to systematically tackle its problems

At F8, Facebook executives showcased the state-of-the-art systems they’re using to address everything from exclusionary algorithms to bad actors posting hard-to-spot content.
Written by Stephanie Condon, Senior Writer

Facebook has no shortage of problems. From election manipulation and hate speech to exclusionary algorithms, the social media giant is constantly putting out fires. On Day 2 of the company's F8 conference in San Jose, California, Facebook executives made the case that advances in AI will help the company rein in all of the toxic content and harmful consequences of its platforms.

"There aren't simple answers to these complex problems," CTO Mike Schroepfer said in the Day 2 keynote address. "When you stare down some of these problems… the magnitude, the breadth, the real-world impact -- it's really easy to lose hope, to want to pack up."

Also: Facebook's PyTorch 1.1 does the heavy lifting for increasingly gigantic neural networks

He argued, however, that Facebook's goal to "bring a better future to people through technology" is worth pursuing. He laid out the technological and operational strategies Facebook is using to combat its problems.

For instance, Facebook is using state-of-the-art technology to identify problematic content, even as bad actors do their best to evade detection, he said. Using techniques like "nearest neighbor" search, Facebook has been able to quickly spot unwanted content -- such as an ad for marijuana that pairs seemingly innocuous images with coded language. Meanwhile, Schroepfer said advances in self-supervised learning have allowed Facebook to scale its content moderation.
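
Schroepfer didn't detail the system, but the basic mechanics of nearest-neighbor matching are easy to sketch: embed each piece of content as a vector, then compare it against a bank of embeddings from known policy-violating posts and flag close matches. The PyTorch snippet below is a hypothetical illustration of that idea; the flag_similar function, the cosine-similarity threshold, and the random stand-in embeddings are all assumptions for illustration, not Facebook's production pipeline.

```python
# Hypothetical sketch of nearest-neighbor content matching -- not
# Facebook's actual system. New items are flagged when their nearest
# neighbor in a bank of known-violating embeddings is very similar.
import torch
import torch.nn.functional as F

def flag_similar(new_embeddings: torch.Tensor,
                 banned_bank: torch.Tensor,
                 threshold: float = 0.9) -> torch.Tensor:
    """Return indices of new items whose nearest banned neighbor exceeds
    a cosine-similarity threshold. Embeddings are assumed to come from
    some upstream encoder (e.g., an image or text model)."""
    new_n = F.normalize(new_embeddings, dim=1)   # (N, D) unit vectors
    bank_n = F.normalize(banned_bank, dim=1)     # (M, D) unit vectors
    sims = new_n @ bank_n.T                      # (N, M) cosine similarities
    best, _ = sims.max(dim=1)                    # closest banned item per new item
    return (best >= threshold).nonzero(as_tuple=True)[0]

# Toy usage: random vectors stand in for real embeddings.
bank = torch.randn(1000, 128)
new_items = torch.randn(5, 128)
new_items[2] = bank[0] + 0.01 * torch.randn(128)  # near-duplicate of a banned item
print(flag_similar(new_items, bank))               # -> tensor([2])
```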

"This is an intensely adversarial game," he said, noting that there are billions of pieces of problematic content a month.

Schroepfer also highlighted the way Facebook used AI to build inclusivity, security and privacy into its video chat device, the Portal. The Portal team made sure it trained Portal's algorithms on a broad data set, so it would recognize all individuals on screen. Meanwhile, they took extra time to build computer vision capabilities that run on the device. Facebook "could've shipped a lot sooner by running these processes in the cloud," he said, but "we knew that wouldn't accomplish what we wanted in terms of privacy and security."


Must read

  • Facebook's Mark Zuckerberg: "The future is private" 
  • Facebook starts remodeling Messenger into the world's "digital living room" 
  • More than 40M businesses now on Messenger, Facebook says at F8 
  • Facebook's Zuckerberg preaches privacy, but his delivery makes it hard to even ponder believing 

Facebook's approach doesn't just rely on advanced algorithms but also on teams of internal and external experts who oversee product development and operations. "It starts by making sure with every product we build… we have embedded teams focused on all the ways those products may cause harm," Schroepfer explained.

This is a significant challenge, given that Facebook's many products all have different features with their own sets of problems. For instance, Groups don't exist on Instagram, so the question of how much authority to give moderators doesn't arise there. Meanwhile, VR creates the challenge of ensuring that users have a sense of physical safety and comfort.

These product teams have the support of centralized experts in areas including security, algorithmic fairness, misinformation, inclusion and accessibility. They also engage with external experts. For instance, groups addressing suicide prevention have consulted with the National Suicide Prevention Lifeline and Save.org.

