The Year of the AI Election Wasn’t Quite What Everyone Expected

WIRED

In the spring, the US saw what was likely its first AI candidate. In a brief campaign for mayor of Cheyenne, Wyoming, Virtual Integrated Citizen (VIC), a ChatGPT-based bot created by a real human named Victor Miller, promised to govern entirely by AI.

At the outset of 2024, many suggested that even if it never won office itself, generative AI would play a pivotal role in democratic elections, and would pose significant risks to them, as more than 2 billion people voted in more than 60 countries. But now, experts and analysts have changed their tune, saying that generative AI likely had little to no effect at all.

So were all those prognostications that 2024 would be the AI election year wrong? The truth is … not really. Experts who spoke to WIRED say that this might have still been the “AI election”—just not in the way many expected.

For starters, much of the hype around generative AI was focused specifically on the threat of deepfakes, which experts and pundits worried had the potential to flood the already murky information space and fool the public.

“I think concern about misleading deepfakes was taking up a lot of oxygen in the room, when it comes to AI,” says Scott Brennen, director of the Center for Technology Policy at New York University. But Brennen says that many campaigns were actually hesitant to use generative AI to create deepfakes, particularly of opponents, partly because the technology was complicated to use. In the US, some also worried they might run afoul of a bevy of new state-level laws that restrict “deceptive” deepfake ads or require disclosure when AI is used in political advertisements.

“I don't think that any campaign or politician or advertiser wants to be a test case, particularly because the way these laws are written, it’s sort of unclear what ‘deceptive’ means,” says Brennen.

Earlier this year, WIRED launched the AI Elections Project to track instances of AI use in elections around the world. An analysis of the project’s data, published by the Knight First Amendment Institute at Columbia University, found that in about half of the documented instances, deepfakes weren’t necessarily intended to be deceptive. That mirrors reporting from The Washington Post, which found that deepfakes didn’t necessarily mislead people or change their minds, but did deepen partisan divisions.

Many pieces of AI-generated content were used to express support for or fandom of certain candidates. For instance, an AI-generated video of Donald Trump and Elon Musk dancing to the Bee Gees song “Stayin’ Alive” was shared millions of times on social media, including by Senator Mike Lee, a Utah Republican.

“It's all about social signaling. It's all the reasons why people share this stuff. It's not AI. You're seeing the effects of a polarized electorate,” says Bruce Schneier, a public interest technologist and lecturer at the Harvard Kennedy School. “It's not like we had perfect elections throughout our history and now suddenly there’s AI and it's all misinformation.”

But don’t get it twisted—there were misleading deepfakes that spread during this election. In the days before Bangladesh’s elections, for instance, deepfakes circulated online encouraging supporters of one of the country’s political parties to boycott the vote. Sam Gregory is program director of the nonprofit Witness, which helps people use technology to support human rights and runs a rapid-response deepfake detection program for civil society organizations and journalists. He says that his team did see an increase in cases of deepfakes this year.

“In multiple election contexts,” he says, “there have been examples of both real deceptive or confusing use of synthetic media in audio, video, and image format that have puzzled journalists or have not been possible for them to fully verify or challenge.” What this reveals, he says, is that the tools and systems currently in place to detect AI-generated media are still lagging behind the pace at which the technology is developing. In places outside the US and Western Europe, these detection tools are even less reliable.

“Fortunately, AI in deceptive ways was not used at scale in most elections or in pivotal ways, but it's very clear that there's a gap in the detection tools and access to them for the people who need it the most,” says Gregory. “This is not the time for complacency.”

The very existence of synthetic media, he says, has meant that politicians can allege that real media is fake—a phenomenon known as the “liar’s dividend.” In August, Donald Trump alleged that images showing large crowds of people turning out to rallies for Vice President Kamala Harris were AI-generated. (They weren’t.) Gregory says that in an analysis of all the reports to Witness’s deepfake rapid-response program, about a third of the cases involved politicians using AI to deny evidence of a real event, many of them concerning leaked conversations.

But Brennen says that the more significant uses of AI this past year were subtler and less sexy. “While there were fewer misleading deepfakes than many feared, there was still a lot of AI happening behind the scenes,” says Brennen. “I think we have been seeing a lot more AI writing copy for emails, writing copy for ads in some cases, or for speeches.” But because these kinds of uses are less consumer-facing than deepfakes, Brennen says, it’s hard to know exactly at what scale the tools were used.

Schneier says that AI actually played a large role in the elections, including “language translation, canvassing, assisting in strategy.”

During the elections in Indonesia, a political consulting firm used a tool built on OpenAI’s ChatGPT to write speeches and draft campaign strategies. In India, Prime Minister Narendra Modi used AI translation software to translate his speeches into several of the languages spoken in India in real time. And Schneier says that these uses of AI have the potential to be good for democracy overall, allowing more people to feel included in the political process and helping small campaigns access resources that would otherwise be out of reach.

“I think we’ll see the most impact for local candidates,” he says. “Most campaigns in this country are tiny. It's a person who's running for a job that may not even be paid.” AI tools that could help candidates connect with voters or file paperwork, says Schneier, would be “phenomenal.”

Schneier also notes that AI candidates and spokespeople can help protect real people and opposition candidates in repressive states. Earlier this year, Belarusian dissidents in exile ran an AI candidate as a protest symbol against President Alexander Lukashenko, Europe’s last dictator. Lukashenko’s government has arrested journalists and dissidents, as well as their relatives.

And for their part, generative AI companies have already entered the mix with US campaigns. Both Microsoft and Google provided training to several campaigns on how to use their products during the election.

“It may not be the year of the AI election yet, because these tools are just starting,” says Schneier. “But they are starting.”
