Lawmakers to Facebook: Your war on deepfakes just doesn't cut it

Facebook faces scrutiny as Democratic lawmakers fear new misinformation campaigns in the 2020 election.


Lawmakers say Facebook's recent promise to remove "misleading manipulated media" ahead of the US 2020 presidential election doesn't go far enough to combat misinformation. 

Facebook this week revealed it would take down deepfake videos if they were doctored in a way that could "mislead someone into thinking that a subject of the video said words that they did not actually say". 

However, Facebook's criteria still allow videos like the manipulated clip of Democratic House Speaker Nancy Pelosi that appeared to show her slurring during a speech last year, since the edit didn't change what the politician actually said.


At yesterday's hearing held by a House Energy & Commerce subcommittee, chairwoman Jan Schakowsky said there is growing evidence that tech firms have failed to self-regulate, Reuters reports.

"I am concerned that Facebook's latest effort to tackle misinformation leaves a lot out," she said.

Lawmakers directed questions about Facebook's inability to handle misinformation at Monika Bickert, Facebook's vice president of global policy management, who testified at the hearing. 

Asked by Florida Democrat congressman Darren Soto why Facebook wouldn't simply remove the fake Pelosi video, Bickert said Facebook wanted to give users a way of "contextualizing" such videos using 'false information' labels.

"Our approach is to give people more information so that if something is going to be in the public discourse they will know how to assess it, how to contextualize it," said Bickert. 

Bickert noted that the Pelosi video was labeled false at the time, but she admitted Facebook was too slow in getting it reviewed by the fact-checkers who decide whether a video should be given a 'false' tag.

"It was labeled false at the time," said Bickert. "We think we could have gotten that to fact-checkers faster and we think the label that we put on it could have been more clear," she added. 

"We now have the label for something that has been rated false: you have to click through it, so it actually obscures the image, and it says 'false information' and that this has been rated 'false' by fact-checkers. You click through it and you see information from the fact-checking source."

Soto asked Bickert whether Facebook could still be used by third parties to mobilize people to attend a fake political rally like the fake Trump rally that Russian operatives organized in 2016 in Florida.

"Our enforcement is not perfect. However, I think we have made huge strides and I think that is shown by the dramatic increase in the number of networks that we've removed," said Bickert, after explaining Facebook had taken down 50 networks in 2019 compared with just one in 2016.


Bickert gave her testimony days after an internal memo from top Facebook executive Andrew Bosworth was published in The New York Times. Bosworth, who says he is not a Trump supporter, wrote that Trump "just did unbelievable work" in the 2016 campaign and defended Facebook's decision to maintain the same ad policies.

Facebook has been criticized for allowing lies in political ads. Bosworth said these policies "very well may lead" to Trump's re-election, but that Facebook should not use "tools available to us to change the outcome".

"If we change the outcomes without winning the minds of the people who will be ruled, then we have a democracy in name only. If we limit what information people have access to and what they can say then we have no democracy at all," he wrote.

More on deepfakes and security

  • Facebook: We'll ban deepfakes but only if they break these rules  
  • Facebook, Microsoft, AWS: We want you to take up the deepfake detection challenge  
  • War on deepfakes: Amazon backs Microsoft and Facebook with $1m in cloud credits  
  • Deepfakes: For now women, not democracy, are the main victims  
  • Facebook, Microsoft: We'll pay out $10m for tech to spot deepfake videos
  • Forget email: Scammers use CEO voice 'deepfakes' to con workers into wiring cash
  • 'Deepfake' app Zao sparks major privacy concerns in China
  • AI, quantum computing and 5G could make criminals more dangerous than ever, warn police
  • Samsung uses AI to transform photos into talking head videos
  • Facebook's fact-checkers train AI to detect "deep fake" videos
  • The lurking danger of deepfakes TechRepublic
  • These deepfakes of Bill Hader are absolutely terrifying CNET