
80% of people think deepfakes will impact elections. Here are three ways you can prepare

A new Adobe report asks US consumers about their thoughts on misinformation, generative AI, and the future of elections.
Written by Sabrina Ortiz, Editor
Ballot illustration: Getty Images/ipopba

Election season typically brings an increase in misinformation as various parties try to swing people to vote for or against different candidates or causes. With the emergence of generative AI, creating this type of content is easier than ever.

On Thursday, Adobe released its inaugural Future of Trust Study, which surveyed 2,000 US consumers about their experiences and concerns with misinformation. As many as 84% of respondents said they are concerned that the content they consume online is at risk of being altered to spread misinformation, and 70% said it's increasingly difficult to verify whether the content they consume is trustworthy.

Also: The best AI image generators of 2024: Tested and reviewed

Furthermore, 80% of the respondents said misinformation and harmful deepfakes will impact future elections, with 83% calling on governments and technology companies to work together to protect elections from the influence of AI-generated content.  

How can you brace yourself for upcoming elections in the AI era? The good news is that companies are already building tools, such as Content Credentials, to help you distinguish AI-generated content from reality.

To help you navigate the upcoming election season as best as possible, ZDNET has some tips, tricks, and tools. 

1. View everything with skepticism

The first and most important thing to remember is to be skeptical about everything. Anyone can now create convincing deepfakes, regardless of technical expertise, using free or inexpensive generative AI models.  

These models can generate fake content virtually indistinguishable from real content across different mediums, including text, images, voice, video, and more. Therefore, seeing or hearing something should no longer be enough for you to believe it. 

A great example is the recent fake robocall of President Joe Biden that encouraged voters not to show up at the polls. This call was generated using ElevenLabs' Voice Cloning tool, which is easy to access and use. You only need an ElevenLabs account, a few minutes of voice samples, and a text prompt.

Also: Microsoft reveals plans to protect elections from deepfakes

The best way to protect yourself is to examine the content and confirm whether what you see is real. The tools and sites below will help you do that.

2. Verify the source of news 

If you encounter content on a site you aren't familiar with, check its legitimacy. Use tools such as the Ad Fontes Media Interactive Media Bias Chart, which evaluates the political bias, news value, and reliability of websites, podcasts, radio shows, and more, as seen in the chart below.

Ad Fontes Media Interactive Media Bias Chart

If the content you encounter is from social media, tread with extra caution, since most platforms let users post whatever they'd like with minimal vetting. It's good practice to cross-reference the content with a reputable news source, and you can use a tool like the chart above to find one worth cross-referencing against.

3. Use Content Credentials to verify images 

Content Credentials act as a "nutrition label" for digital content, securely attaching information such as who created the content and what edits were made, using cryptographic metadata and watermarking. Many AI image generators, such as Adobe Firefly, automatically attach Content Credentials indicating that the content was generated with AI.

"Recognizing the potential misuse of generative AI and deceptive manipulation of media, Adobe co-founded the Content Authenticity Initiative in 2019 to help increase trust and transparency online with Content Credentials," Andy Parsons, Adobe's senior director of the Content Authenticity Initiative, said in a statement.

Also: What are Content Credentials? Here's why Adobe's new AI keeps this metadata front and center

Viewing an image's Content Credentials is a great way to verify how it was made. You can see that information by uploading the image to the Content Credentials website and using its "inspect" feature. If the image doesn't carry the information in its metadata, the site will match it against similar images on the internet and tell you whether those images were AI-generated.
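If you're comfortable with a little code, you can also run a rough first check locally. The Python sketch below is a heuristic only, not a substitute for the Inspect tool or an official C2PA SDK: it scans a JPEG for the APP11 segments where Content Credentials' C2PA/JUMBF metadata is commonly embedded. The file name is a placeholder.

import struct

# Rough heuristic: scan a JPEG's APP11 segments for JUMBF/C2PA markers.
# This only detects the presence of embedded Content Credentials; it does not
# validate the cryptographic signatures -- use the Content Credentials Inspect
# site or an official C2PA SDK for that. The file name below is a placeholder.
def has_content_credentials(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):   # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost segment sync; stop scanning
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):         # end of image / start of scan data
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True                    # APP11 segment carrying JUMBF/C2PA data
        i += 2 + length
    return False

print(has_content_credentials("suspect-image.jpg"))

If the function returns True, the file carries embedded provenance data worth inspecting properly; if it returns False, the metadata may simply have been stripped, so treat the result as a hint rather than proof.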

You can also reverse-search an image by dropping it into Google Search. This can help you determine when it first appeared, where it came from, and whether it has been published by reputable websites and publications.
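For the technically inclined, a related check can be done offline with a perceptual hash, which indicates whether two files show the same picture even after resizing or recompression. The Python sketch below uses the open-source Pillow and ImageHash packages; the file names and the distance threshold are illustrative assumptions, not hard rules.

# Illustrative sketch: compare a suspect image against a known original with a
# perceptual hash (requires the open-source Pillow and ImageHash packages).
# File names and the distance threshold are placeholders, not fixed rules.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect-image.jpg"))
original = imagehash.phash(Image.open("known-original.jpg"))

distance = suspect - original   # Hamming distance between the two hashes
print(f"Hash distance: {distance}")
if distance <= 8:               # small distance: visually the same picture
    print("Likely the same underlying image (possibly resized or recompressed).")
else:
    print("Substantially different; treat the suspect copy with extra caution.")

A small distance suggests the suspect file is just a re-saved copy of the original, while a large one means the two images genuinely differ and the questionable version deserves closer scrutiny.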
