In an effort to combat the prevalence of deepfakes, Microsoft has launched a new video authenticator tool that can analyse a still photo or video and provide a percentage chance that the media has been artificially manipulated.
In the case of a video, Microsoft said the tool can provide this percentage in real time for each frame as the video plays. It works by detecting the blending boundary of the deepfake, along with subtle fading or greyscale elements that might not be detectable by the human eye.
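Microsoft has not published how its classifier works internally, but the idea of emitting one confidence percentage per frame can be illustrated with a deliberately simple heuristic. The sketch below is entirely illustrative, not Microsoft's model: it scores each grayscale frame by the strength of abrupt intensity jumps, a crude stand-in for the blending-boundary artefacts described above.

```python
def frame_score(frame):
    """Toy heuristic: score a grayscale frame (2D list, values 0-255) by the
    average absolute horizontal intensity jump between neighbouring pixels.
    Sharp, localised jumps loosely mimic blending-boundary artefacts.
    This is NOT Microsoft's actual detector, just an illustration."""
    jumps, count = 0, 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            jumps += abs(left - right)
            count += 1
    # Normalise the mean jump (0-255) into a 0-100 "manipulation" percentage.
    return round(100 * (jumps / count) / 255, 1) if count else 0.0

def score_video(frames):
    """Emit one confidence percentage per frame, as the real tool does."""
    return [frame_score(f) for f in frames]

smooth = [[10, 10, 10, 10]] * 4    # uniform frame: no boundary artefacts
seamed = [[10, 10, 250, 250]] * 4  # hard vertical seam mid-frame

print(score_video([smooth, seamed]))  # [0.0, 31.4]
```

A production detector would of course use a trained model rather than a single hand-written edge statistic, but the shape of the output, a per-frame confidence stream, is the same.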
Deepfakes, or synthetic media, can be photos, videos, or audio files manipulated by artificial intelligence (AI). Microsoft said detection of deepfakes is crucial in the lead-up to the US election.
The tech was created using a public dataset from FaceForensics++, and Microsoft said it was tested on the DeepFake Detection Challenge Dataset, which it considers a leading model for training and testing deepfake detection technologies.
"We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods," the company said in a blog post.
"Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media."
With few tools available to do this, Microsoft has also unveiled a new technology it said can both detect manipulated content and assure people that the media they're viewing is authentic.
The tech has two components. The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content.
"The hashes and certificates then live with the content as metadata wherever it travels online," Microsoft explained.
The second is a reader, which can be included in a browser extension, that checks the certificates and matches the hashes to determine authenticity.
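Microsoft hasn't released implementation details of this producer/reader pair, but the flow it describes can be sketched roughly. The snippet below is a minimal illustration, not Microsoft's actual technology: it uses SHA-256 for the content hash and, as a stand-in for certificate-based signing, an HMAC with a shared key (a real deployment would sign the hash with a public-key certificate so any reader could verify it).

```python
import hashlib
import hmac

# Stand-in for the private key behind a producer's certificate (assumption).
SECRET_KEY = b"producer-signing-key"

def publish(content: bytes) -> dict:
    """Producer side: attach a hash and a signed 'certificate' as metadata
    that travels with the content."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "hash": digest, "signature": signature}

def verify(package: dict) -> bool:
    """Reader side (e.g. a browser extension): recompute the hash and check
    both it and the signature against the attached metadata."""
    digest = hashlib.sha256(package["content"]).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == package["hash"] and hmac.compare_digest(expected, package["signature"])

original = publish(b"raw video bytes")
print(verify(original))   # True: untouched content verifies

tampered = dict(original, content=b"edited video bytes")
print(verify(tampered))   # False: recomputed hash no longer matches
```

The key property is that the metadata is bound to the exact bytes of the content, so any edit after publication breaks verification at the reader.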
In its deepfake fight, Microsoft has also partnered with the AI Foundation. The partnership will see the two parties make the video authenticator available to organisations involved in the democratic process, including news outlets and political campaigns, through the foundation's Reality Defender 2020 initiative.
The video authenticator will initially be available only through the initiative.
Another partnership with a consortium of media companies, known as Project Origin, will see Microsoft's authenticity technology tested. The Trusted News Initiative, an initiative from a number of publishers and social media companies, has also agreed to engage with Microsoft on testing its technology.
The University of Washington, deepfake detection firm Sensity, and USA Today have also joined Microsoft to boost media literacy.
"Improving media literacy will help people sort disinformation from genuine facts and manage risks posed by deepfakes and cheap fakes," Microsoft said. "Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens while still appreciating satire and parody."
Through the partnership, there will be a public service announcement campaign encouraging people to take a "reflective pause" and check to make sure information comes from a reputable news organisation before they share or promote it on social media ahead of the election.
The parties have also launched a quiz for US voters to learn about synthetic media.
- Google's war on deepfakes: As election looms, it shares ton of AI-faked videos
- Lawmakers to Facebook: Your war on deepfakes just doesn't cut it
- Facebook: We'll ban deepfakes but only if they break these rules
- Twitter: We'll kill deepfakes but only if they're harmful
- Facebook, Microsoft, AWS: We want you to take up the deepfake detection challenge
- The lurking danger of deepfakes (TechRepublic)