
Facebook: We'll ban deepfakes but only if they break these rules

Some deepfake videos could remain on Facebook – they just might not be promoted through the News Feed.
Written by Liam Tung, Contributing Writer

Facebook is tightening its controls on deepfakes, or AI-manipulated video and photos, ahead of the 2020 US presidential election. 

The social network will remove "misleading manipulated media" if it meets two key criteria. However, according to the Washington Post, the new rules may still allow the type of video that showed House Speaker Nancy Pelosi appearing to speak with a slur last year. 

AI-manipulated video isn't all that common today, but there are fears it could be used to sow social discord by depicting politicians and public figures saying things they never said or behaving in ways they never did. 


Facebook says it will take down a video if it has been "edited or synthesized – beyond adjustments for clarity or quality – in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say".

The second criterion is that the video or image is "the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic".

However, it will still allow content that is "parody or satire" or video that has been edited only to omit or change the order of words. 
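
Taken together, the rule reads like a simple decision procedure: a video is removed only when both criteria are met and neither exception applies. The short Python sketch below models that stated logic for illustration only; the type and function names are hypothetical and do not represent Facebook's actual moderation systems.

    from dataclasses import dataclass

    @dataclass
    class Video:
        # Edited or synthesized in ways not apparent to an average person,
        # likely misleading viewers about what the subject said.
        misleading_edit: bool
        # Produced by AI/ML that merges, replaces or superimposes content
        # onto a video, making it appear authentic.
        ai_manipulation: bool
        parody_or_satire: bool
        only_omits_or_reorders_words: bool

    def should_remove(video: Video) -> bool:
        """Apply Facebook's stated takedown rule to one video (illustrative only)."""
        # Stated exceptions: parody/satire, or edits that merely omit words
        # or change their order, stay up under this policy.
        if video.parody_or_satire or video.only_omits_or_reorders_words:
            return False
        # Both criteria must hold for removal.
        return video.misleading_edit and video.ai_manipulation

A video that fails this removal test can still be sent to fact-checkers, as described below.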

Monika Bickert, Facebook's vice president of global policy management, outlined the rules today ahead of her testimony at a congressional hearing on Wednesday about how to combat "manipulation and deception in the digital age". 

Facebook refused last May to take down the manipulated video of Pelosi, instead minimizing its distribution by not showing it at the top of the News Feed. 

Bickert said content that violates Facebook's community standards will be removed, and videos that don't meet the criteria for removal can still be reviewed by the company's independent third-party fact-checkers.  

"If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it's being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it's false," said Bickert. 

As far as the 2020 elections go, Facebook has drawn criticism for its decision not to remove political ads containing lies. Facebook CEO Mark Zuckerberg defended the decision on free speech grounds, arguing that a private company should not be censoring politicians or news.   


But the company appears to be taking the threat of deepfakes seriously, backing the Deepfake Detection Challenge, for which it has generated 100,000 deepfake videos that researchers can use to develop novel detection techniques. 

Bickert noted that videos flagged as false by Facebook's fact-checkers aren't necessarily removed, but they will carry warnings telling users they are false. 

"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we're providing people with important information and context," wrote Bickert. 

More on deepfakes and security

  • Facebook, Microsoft, AWS: We want you to take up the deepfake detection challenge  
  • War on deepfakes: Amazon backs Microsoft and Facebook with $1m in cloud credits  
  • Deepfakes: For now women, not democracy, are the main victims  
  • Facebook, Microsoft: We'll pay out $10m for tech to spot deepfake videos
  • Forget email: Scammers use CEO voice 'deepfakes' to con workers into wiring cash
  • 'Deepfake' app Zao sparks major privacy concerns in China
  • AI, quantum computing and 5G could make criminals more dangerous than ever, warn police
  • Samsung uses AI to transform photos into talking head videos
  • Facebook's fact-checkers train AI to detect "deep fake" videos
  • The lurking danger of deepfakes (TechRepublic)
  • These deepfakes of Bill Hader are absolutely terrifying (CNET)