California takes on deepfakes in porn and politics

The state hopes new laws will stop the use of deepfake technology to manipulate political speech and sexual content.

The state of California has enacted a set of laws to crack down on the use of deepfake technologies in political discourse and pornographic content.

The term 'deepfake' is relatively new, but despite its fledgling stage, the practice has already caught the attention of lawmakers. Deepfaking uses machine learning algorithms, or what we currently brand as artificial intelligence (AI), to insert fabricated content into videos or to manipulate audio, including a person's voice patterns.

Sometimes, deepfakes are used purely for satire, such as to poke fun at government officials and the current political landscape. 

However, AI-manipulated content can take a dark turn when used to generate political propaganda capable of fooling the general public.

Pornography, too, has emerged as a popular application of deepfaking. A recent study of the practice found that women make up the majority of victims of deepfake porn, also known as "involuntary porn."

In these cases, a victim's face is inserted into existing pornographic content, with real-world consequences similar to those of revenge porn: a humiliated victim and damage to family, friends, and workplaces.

As previously reported by ZDNet, Deeptrace's report says that generative adversarial networks (GANs) are being used to generate deepfakes, and that the number of recorded deepfake videos has climbed from 8,000 in December 2018 to over 14,600 at the time of writing. In total, 96 percent of all deepfake videos are pornographic in nature.
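For readers unfamiliar with the underlying technique, the sketch below illustrates the adversarial setup a GAN relies on: a generator learns to produce fakes while a discriminator learns to tell them from real samples, and each network improves by trying to beat the other. This is a minimal illustration only, assuming PyTorch and toy random tensors in place of real face data; the architecture and all names are illustrative, not any production deepfake pipeline.

```python
# Minimal GAN training loop (PyTorch assumed) -- illustrative only,
# not a production deepfake pipeline.
import torch
import torch.nn as nn

LATENT = 100  # size of the random noise vector fed to the generator

generator = nn.Sequential(          # noise -> flattened 64x64 "image"
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)
discriminator = nn.Sequential(      # image -> probability it is real
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, 64 * 64)   # stand-in for real face crops
    noise = torch.randn(32, LATENT)
    fake = generator(noise)

    # Discriminator: label real images 1, generated images 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Real deepfake systems swap in deep convolutional networks and large face datasets, but the training loop follows the same adversarial pattern.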

California Gov. Gavin Newsom has signed two laws, Assembly Bills 730 and 602, in preparation for the battle against deepfakes and the political consequences that tampered content may have on future elections.

The first bill, AB730, imposes a ban on the production and distribution of "malicious" or "materially deceptive" deepfake political material, including the superimposition of images onto audio and video campaign material, until at least 2023.

"This bill would [...] prohibit a person, committee, or other entity, within 60 days of an election at which a candidate for elective office will appear on the ballot, from distributing with actual malice materially deceptive audio or visual media of the candidate with the intent to injure the candidate's reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating that the media has been manipulated," the bill reads. 

However, allowances are made for deepfake material created for satire or parody, and for content that carries a public disclosure making clear it has been manipulated.

The second bill, AB602, is focused on giving citizens the right to fight back if they become the victims of deepfake manipulation, especially in cases of sexually explicit material.

While existing legislation already provides a legal route for those whose private photographs are shared without consent, such as in cases of revenge porn, the bill adds a potential cause of action against anyone who "creates and intentionally discloses sexually explicit material" when that person "knows or reasonably should have known the depicted individual did not consent to its creation or disclosure," as well as against those who share "sexually explicit material that the person did not create" or consent to.

The ramifications of deepfake content have not gone unnoticed by technology vendors, either. 

Facebook and Microsoft announced a new project in September called the Deepfake Detection Challenge, dangling rewards and grants worth $10 million to academics willing to pitch and develop solutions for the automatic detection of deepfake videos. 

In order to provide a dataset for academics to work with, the social networking giant has pledged to pay actors to develop both deepfake video and audio. 
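The detection side of the challenge can be framed, in its simplest form, as binary classification of video frames. The sketch below shows that baseline idea, again assuming PyTorch; the network, data, and labels are hypothetical stand-ins, not Facebook's or Microsoft's actual systems.

```python
# Frame-level real-vs-fake classifier -- a sketch of the kind of baseline
# the Deepfake Detection Challenge invites, not any entrant's real system.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),               # logit: > 0 means "fake"
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

# Hypothetical batch: 8 RGB frames (3x128x128), labelled 0 = real, 1 = fake.
frames = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

logits = classifier(frames)
loss = loss_fn(logits, labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()

# At inference time, per-frame scores would be averaged across a video
# to decide whether to flag the whole clip as manipulated.
```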

Google, too, is keen to get in on the action. In the same month, the tech giant released to academics a database of 3,000 AI-manipulated videos, also created with the help of paid actors.

The database has been contributed to FaceForensics, a benchmark being developed by the Technical University of Munich and the University Federico II of Naples in the hope that it will become the accepted standard for detecting deepfakes in the future.
