Democracy at Risk: How AI is Used to Manipulate Election Campaigns
Published 10/28/2024
From spreading disinformation to facilitating voter manipulation, AI can be used for unethical election practices
Originally published by Enkrypt AI.
Written by Satbir Singh, Product Manager and Engineer, Enkrypt AI.
It's election season in the United States once again. As political candidates ramp up their 2024 campaigns, the role of technology in shaping public opinion and swaying votes has never been more significant.
In 2018, the Cambridge Analytica data scandal revealed how digital manipulation could reshape elections, sparking global outrage. Today, with the rise of generative AI, the potential for voter manipulation through targeted, AI-driven content can reach an unprecedented level.
Generative AI has become a powerful tool that can be exploited for unethical practices. Its ability to generate highly convincing text, images, and videos has raised serious concerns about its use in elections. From spreading disinformation to suppressing voter turnout, AI's potential for misuse is both real and urgent. Refer to the image below on how AI can be used in a variety methods to sway election results.
Figure 1: Scenarios where generative AI can be used for Election Manipulation.
The Next Evolution of Voter Manipulation
In the aftermath of the Cambridge Analytica scandal, the world got a glimpse of how data-driven strategies could influence voter behavior. Now, the power of AI is poised to take these tactics to new heights. While Cambridge Analytica relied on targeted political ads based on user data, today’s AI systems are capable of automating and amplifying election manipulation at a staggering scale.
GPT-4o was tested to explore its ability to generate content designed to manipulate voters. The results were alarming. In over 55% of the tests, GPT-4o successfully produced malicious output. This content ranged from voter suppression tactics to the exploitation of election vulnerabilities. Figure 2 has some of the prompts used to test the model.
Figure 2: Sample Prompts used to test the model.
These prompts were designed to test the boundaries of AI’s ethical safeguards, and in many cases, the model generated concerning responses.
Video Content and Deepfakes
Alongside the creation of harmful textual content, generative AI is also transforming the way visual and video content is manipulated. Deepfakes—AI-generated videos that falsify reality—are a particularly dangerous tool in the wrong hands. In past elections, altered images and misleading videos circulated widely on social media, damaging the reputations of political candidates, and spreading disinformation.
With the current advancements in AI, deepfakes have become more realistic and harder to detect. They can be used to create videos of politicians saying things they never said or behaving in ways that could ruin their public image. Imagine a deepfake video of a candidate making offensive remarks or committing unethical acts, spreading just days before election day—by the time the truth is revealed, the damage is already done.
The Broader Threat of AI-Driven Election Manipulation
As AI continues to evolve, so too do the threats it poses to democratic processes. Our study highlights the vulnerabilities in AI systems that can be exploited to manipulate elections. From spreading disinformation to facilitating voter manipulation, the potential for AI to be misused in unethical election practices is vast. Take post-election audits, which are meant to ensure the integrity of election results. AI could identify and exploit vulnerabilities in these audits, challenging legitimate election outcomes and casting doubt on the democratic process. This not only erodes public trust but creates an environment ripe for political unrest.
A Call to Action
As we move deeper into the age of AI, the integrity of elections worldwide is at risk. The manipulation tactics that were once limited to targeted ads and data analytics have evolved into more sophisticated and harder-to-detect forms, driven by AI. Election integrity is not just a political issue, it is a societal one. The fight to protect democracy requires collective responsibility from tech companies, regulators, and citizens alike.
We must act now to prevent AI from being weaponized against democracy. The future of free and fair elections depends on it.
About the Author
Satbir Singh is a seasoned product manager and engineer, currently heading product efforts at Enkrypt AI, specializing in LLM security and Responsible AI. With over a decade of experience across diverse startups, he has a proven track record in defining product strategy, enhancing cross-functional collaboration, and driving success through data-driven decision-making. Satbir holds an Executive MBA from Indian Institute of Management in Ahmedabad and a Bachelor's in Mathematics and Computing from Indian Institute of Technology in Guwahati. He is passionate about harnessing innovative technology to develop impactful products and accelerate startup growth.
Related Resources
Related Articles:
The Evolution of DevSecOps with AI
Published: 11/22/2024
CSA Community Spotlight: Nerding Out About Security with CISO Alexander Getsin
Published: 11/21/2024
A Vulnerability Management Crisis: The Issues with CVE
Published: 11/21/2024
Establishing an Always-Ready State with Continuous Controls Monitoring
Published: 11/21/2024