Tech giants promise to combat fraudulent AI content in mega elections year

Tech companies including Google, Meta, and OpenAI have signed an accord to combat deceptive AI-generated content in 2024, a year with more elections than any other in history.
Written by Eileen Yu, Senior Contributing Editor

Google, Meta, OpenAI, and X (formerly Twitter) are among 20 technology companies that have pledged to weed out fraudulent content generated by artificial intelligence (AI), as part of efforts to safeguard global elections expected to take place this year. 

The group of companies signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections at the Munich Security Conference. The accord outlines a "voluntary framework of principles and actions" covering the prevention, detection, and evaluation of deceptive AI election content, responses to it, and identification of its source.

Also: The ethics of generative AI: How we can harness this powerful technology

The accord also includes efforts to raise public awareness of how people can protect themselves from being manipulated by such content, according to a joint statement released by the signatories, which also include TikTok, Amazon, IBM, Anthropic, and Microsoft. 

With the accord, the 20 organizations pledge to uphold eight commitments, including seeking to detect and prevent the distribution of deceptive AI election content and being transparent with the public about how they address such content. They will work together to develop and implement tools to identify and curb the spread of such content, as well as to track its origins. 

These efforts can include developing classifiers or provenance methods and standards, such as watermarking or signed metadata, and attaching machine-readable information to AI-generated content. 
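
To make the signed-metadata idea concrete, here is a minimal, illustrative sketch in Python: a provenance record (which tool generated the content and when, plus a hash of the content) is cryptographically bound to the content so that tampering is detectable. Everything here, from the function names to the shared demo key, is a hypothetical assumption for illustration; production standards such as C2PA use certificate-based public-key signatures and far richer manifests, and the accord itself does not prescribe any particular implementation.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical shared signing key for this demo only. Real provenance
# standards (e.g., C2PA) use certificate-based public-key signatures.
SECRET_KEY = b"demo-provenance-key"

def sign_provenance(content: bytes, generator: str) -> dict:
    """Build a provenance record and bind it to the content with an HMAC."""
    record = {
        "generator": generator,  # which model or tool produced the content
        "created": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True if the signature is valid and the content is unmodified."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

# Usage: sign a piece of synthetic content, then verify it and a tampered copy.
image_bytes = b"...synthetic image data..."
manifest = sign_provenance(image_bytes, generator="example-image-model-v1")
print(verify_provenance(image_bytes, manifest))          # True
print(verify_provenance(image_bytes + b"x", manifest))   # False: content altered
```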

The eight commitments will apply where relevant to the services each company provides. 

Also: Elections 2024: How AI will fool voters if we don't do something now

The accord covers content defined as "convincing" AI-generated audio, video, and images that "deceptively fake or alter the appearance, voice, or actions" of political candidates, election officials, and other key stakeholders in an election, or that spread false information to the public about where, when, and how to vote.

"2024 will bring more elections to more people than any year in history, with more than 40 countries and more than four billion people choosing their leaders and representatives," the tech accord states. "At the same time, the rapid development of AI is creating new opportunities as well as challenges for the democratic process. All of society will have to lean into the opportunities afforded by AI and to take new steps together to protect elections and the electoral process during this exceptional year."

Also: We're not ready for the impact of generative AI on elections

The accord aims to set expectations for how the signatories will manage the risks arising from deceptive AI election content created via their public platforms or open foundation models, or distributed on their social and publishing platforms, in line with each signatory's own policies and practices.

Models or demos intended for research purposes or primarily for enterprise use are not covered under the accord. 

The signatories added that AI can be leveraged to help defenders counter bad actors and enable swifter detection of deceptive campaigns. AI tools can also significantly lower the overall cost of defense, allowing smaller organizations to implement robust protections. 

"We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not only a technical challenge, but a political, social, and ethical issue and hope others will similarly commit to action across society," the signatories said. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

Also: Want to work in AI? How to pivot your career in 5 steps

Christoph Heusgen, chairman of the Munich Security Conference, said the accord is a "crucial step" in advancing election integrity and societal resilience. It will also help create "trustworthy tech practices," he said.

Risks that AI-powered misinformation poses to societal cohesion will dominate the landscape this year, according to the Global Risks Report 2024, released last month by the World Economic Forum (WEF). The report ranks misinformation and disinformation as the leading global risk over the next two years, warning that their widespread use, along with the tools to disseminate them, could undermine the legitimacy of incoming governments. 
