
Facebook, Microsoft: We'll pay out $10m for tech to spot deepfake videos

Facebook will create its own deepfake videos to help build a system that can detect them.
Written by Liam Tung, Contributing Writer

If AI-generated 'deepfake' videos are going to spread like wildfire and cause social chaos in the future, it's probably going to happen on Facebook or Facebook-owned WhatsApp and Instagram. 

To boost the chances that technology can detect deepfake videos when humans can't, the social network and advertising giant has launched the Deepfake Detection Challenge, offering $10m in research grants and rewards. 

The effort is also being backed by the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and State University of New York at Albany.  


Facebook has in the past conducted psychological experiments on users without gaining their consent. But this time the company stresses that no Facebook user data will be used. Instead, it is "commissioning a realistic dataset that will use paid actors, with the required consent obtained, to contribute to the challenge". 

The aim is to create tech that everyone can use to detect when a video has been manipulated with AI.      

However, to do that, it needs a larger dataset of deepfake content to work with, and so far the industry has neither such a dataset nor a benchmark for detecting deepfakes, according to Facebook. 
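
Facebook has not said how entries to the challenge will be scored. As a purely illustrative sketch, binary detection benchmarks of this kind are often scored with log-loss over per-video predictions, which rewards well-calibrated confidence; the function below is a hypothetical example of such a metric, not the challenge's actual scoring rule.

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy: lower is better.

    y_true: 1 for a deepfake video, 0 for a real one.
    y_pred: the model's predicted probability that the video is fake.
    """
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

# A confident correct prediction scores near 0; a confident wrong one
# is heavily penalized, which makes this stricter than plain accuracy.
print(log_loss([1, 0], [0.9, 0.1]))   # ~0.105
print(log_loss([1, 0], [0.1, 0.9]))   # ~2.303
```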

So, Facebook is going to help create that dataset of deepfake video and audio with paid actors using the latest deepfake techniques. 
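
What entrants build on top of that dataset is up to them, but deepfake detectors are commonly framed as binary video classifiers. The PyTorch sketch below is a minimal, hypothetical illustration of that framing: it fine-tunes a standard image backbone to label individual video frames as real or fake. The frames/real and frames/fake directory layout and all hyperparameters are assumptions for the example, not part of the challenge.

```python
# Minimal, illustrative frame-level deepfake classifier.
# Assumes frames have already been extracted into frames/real and
# frames/fake; nothing here comes from the challenge itself.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder maps the subdirectory names (real/, fake/) to class labels.
train_set = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a standard backbone; swap the head for a 2-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # illustrative; real training needs far more care
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

A per-video score can then be obtained by averaging the per-frame probabilities, a common way to aggregate frame-level detectors.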

Facebook shows a side-by-side demo video of a real actor speaking next to one it created using other actors. 

But as a recent CEO fraud case demonstrated, deepfake video designed to disrupt society is not the only threat. A UK CEO was recently duped into wiring $243,000 to a fraudster's account by deepfake audio mimicking his superior's voice, a new twist on the lucrative business email compromise fraud.  

Facebook's deepfake detection drive follows the Defense Advanced Research Projects Agency's (DARPA) latest effort to build detection systems under its Semantic Forensics, or SemaFor, program. 

Announcing a new tender at the end of August, DARPA notes the connection between media manipulation and social media when it comes to the threat of disinformation causing unrest in the real world. 


The new program focuses on spotting the semantic errors that generative models often introduce into manipulated media as a result of the limits of their training data.

"There is a difference between manipulations that alter media for entertainment or artistic purposes and those that alter media to generate a negative real-world impact," said Dr Matt Turek, a program manager in DARPA's Information Innovation Office (I2O).

"The algorithms developed on the SemaFor program will help analysts automatically identify and understand media that was falsified for malicious purposes." 

