The Growing Influence of AI in Elections
As the 2024 presidential election between Vice President Kamala Harris and former President Donald Trump approaches, the United States finds itself grappling with a new kind of electoral interference: artificial intelligence (AI). A recent incident in New Hampshire, where an AI-generated robocall mimicking President Joe Biden urged voters to stay home, marks one of the first documented uses of AI to meddle in a U.S. election. The event underscores a pressing question: is the United States prepared for AI’s growing influence on the electoral process?
AI-Generated Disinformation: A Global Preview
The New Hampshire robocall incident, traced back to Texas-based companies Life Corporation and Lingo Telecom, may be just the tip of the iceberg. Lisa Gilbert, Executive Vice-President of Public Citizen, emphasizes that the mere existence of such disinformation efforts is alarming, irrespective of their immediate impact on voter turnout.
Globally, we see a disturbing trend. In Slovakia, fake audio recordings have swayed elections, and in Indonesia, AI-generated avatars have reshaped political perceptions. India has also witnessed AI resurrecting deceased politicians to support current officials. These international examples serve as a stark warning for what the United States might encounter in its 2024 elections.
Regulatory Lag: The Struggle to Keep Pace
Despite these clear threats, the U.S. regulatory framework is lagging. Following the New Hampshire incident, the Federal Communications Commission (FCC) banned AI-generated robocalls. However, the Federal Election Commission (FEC) has yet to establish comprehensive rules governing AI in political ads. Some states are attempting to bridge this regulatory gap, but the pace is uneven.
A bipartisan task force in the U.S. House is exploring AI regulation, yet partisan gridlock and the rapid evolution of AI technologies pose significant hurdles. As Gilbert notes, “We need to move really fast” to prevent a “wild west” scenario in which AI-driven disinformation runs rampant.
The Deceptive Power of AI
AI’s potential for deception is staggering. From deepfake videos and audio to AI-generated images, the technology can fabricate events and statements with alarming realism. Audio is particularly potent: experts warn that AI-generated calls could mimic the voices of family members, a tactic already familiar from the “grandparent scam,” making the deception even more convincing.
Even seemingly benign uses of AI, like creating audio messages from the voices of mass shooting victims to advocate for gun control, or crafting alternate reality campaign ads, can blur the line between reality and fabrication. Chester Wisniewski, a cybersecurity expert at Sophos, aptly describes this as moving from “handcraft artisanal election disinformation” to mass-produced deceit.
AI and Voter Trust: A Crisis in the Making
The erosion of trust is perhaps the most insidious effect of AI in elections. As AI-generated content becomes more sophisticated, voters may start to question the authenticity of everything they see and hear. This pervasive doubt can undermine informed democratic decision-making, creating a skeptical environment where truth and falsehood are indistinguishable.
Katie Harbath, formerly of Facebook, highlights this dilemma: “There’s a difference between what AI might do and what AI is actually doing.” The mere possibility of AI-generated disinformation can make voters second-guess their perceptions, eroding the foundation of trust essential for democracy.
Corporate Responsibility and the Path Forward
In the absence of robust government regulation, tech companies are stepping in with self-imposed measures. Google, Meta, Microsoft, and OpenAI have pledged to implement “reasonable precautions,” such as additional labeling of AI-generated political content and directing users to trusted voting information. However, these efforts may not be sufficient to counteract determined bad actors.
At the state level, some regulations now mandate clear disclosures in political ads that use AI. The FEC has initiated a rule-making process, expected to conclude by summer, to establish federal guidelines. The urgency is clear: without swift and comprehensive action, the 2024 elections could see unprecedented levels of AI-driven disinformation.
The First AI Election: A Looming Challenge
The 2024 presidential election is poised to be America’s first “AI election,” with AI’s influence permeating every stage of the electoral process. From crafting campaign messages to sowing disinformation, AI’s capabilities present unique challenges that demand immediate attention.
Campaign Legal Center (CLC) and other advocacy groups are working tirelessly to educate the public and recommend policy solutions to mitigate AI’s risks. They emphasize the dangers of deepfakes and other AI-generated falsehoods that could mislead voters and disrupt election administration.
In conclusion, while AI holds great promise for various applications, its potential misuse in elections poses a grave threat to democratic integrity. As the U.S. navigates this new terrain, swift regulatory action, public education, and corporate responsibility will be crucial in safeguarding the electoral process against the disruptive power of AI.