
Facebook, Microsoft, AWS: We want you to take up the deepfake detection challenge

Facebook provides 100,000 AI-manipulated videos for researchers to develop better deepfake detection systems.
Written by Liam Tung, Contributing Writer

Facebook has announced the launch of the Deepfake Detection Challenge, an effort backed by several universities, AWS, and Microsoft. 

The organizations announced the challenge in September, committing $10m in grants and rewards for research that could help create detection systems for AI-generated deepfake videos. 

Facebook said at the time it would help create a dataset of deepfake video and audio featuring paid actors and produced with the latest deepfake techniques. The dataset would help address the shortage of data available for building detection systems, as well as for benchmarking their effectiveness. 


Facebook launched the challenge at the Conference on Neural Information Processing Systems (NeurIPS) and is offering entrants access to a unique data set of 100,000-plus videos that were created to aid research on deepfakes. 

Google in September also contributed 3,000 new videos using paid actors to help improve detection techniques. A month later Amazon Web Services backed the Deepfake Detection Challenge with $1m in cloud credits.  

Deepfake videos and audio have caused alarm among policymakers for their potential to sow discord among different groups, particularly around democratic processes like the US 2020 presidential elections, but also lower-profile elections across the globe.  

"Ensuring that cutting-edge research can be used to detect deepfakes depends on large-scale, close-to-reality, useful, and freely available datasets. Since that resource didn't exist, we've had to create it from scratch," said Cristian Canton Ferrer, a Facebook AI Research Manager leading the project.  

"The resulting data set consists of more than 100,000 videos featuring paid actors in realistic scenarios, with accompanying labels describing whether they were manipulated with AI."  

Google-owned data-science crowdsourcing platform Kaggle has signed up to host the Deepfake Detection Challenge itself and its leaderboard. The competition will run through the end of March 2020. 


Researchers face several key deadlines throughout March before the competition's private leaderboard is revealed, which is expected to happen on April 22, 2020. 

It's not clear how many participants will receive awards, but the top Kaggle prize is $500,000, followed by a second prize of $300,000, and a third prize of $100,000. The fourth and fifth prizes are $60,000 and $40,000, respectively. 

According to Facebook, its AI researchers used multiple techniques to swap the faces of subjects and alter their voices from the original videos. To some videos it applied 'augmentations' to imitate the real-life degradation that footage undergoes after being shared online.

The company is also on the lookout for full-body swaps, Canton Ferrer told IEEE Spectrum, though these body-swap videos aren't yet as advanced as face swaps.


Facebook says an example video from the Deepfake Detection Challenge dataset shows the unaltered image, left, and a deepfake, right.  

Image: Facebook
