AI’s Potential Impact on the 2024 Election Goes Beyond Deepfakes
Earlier this week, President Joe Biden announced his intention to run for re-election in 2024, and the Republican National Committee (RNC) responded with a negative ad imagining the dystopian future awaiting us should he secure a second term. The video is made up entirely of images created by generative AI software, appearing to show President Biden and VP Harris’ victory celebration, followed by nightmarish images of the immediate consequences for Americans. (These include a flood of new immigrants arriving at the southern border, various international conflicts, and a new financial crisis.) It’s the RNC’s first entirely AI-generated ad, and one of the first major political ads in U.S. history created with a generative AI app.
While some of the ad’s images look fairly lifelike, there remains an uncanny surreality to many shots. The ad’s creators employed this purposefully, suggesting not verifiable reality but an imagined, “dark” future should the Republican candidate ultimately lose the election. The current state of generative AI is well suited to designing a dystopian near-future, if not yet an entirely credible and compelling vision of our own world.
Still, the existence of an AI-generated political ad – a year and a half before Americans actually go to the polls – serves as a canary-in-the-coal-mine moment. Whether AI apps will impact politics is no longer a purely theoretical question: the technology is here right now, and it’s already making a difference.
The Growing Concern About AI-Created Misinformation in Elections
Not surprisingly, the ability of generative AI apps to create credible fake images, audio clips, or even videos has received most of the attention. Putting words in a candidate’s mouth, or depicting them in a compromising scenario, obviously has a lot of persuasive power. If AI fakes were believable enough, and spread far enough before being discredited, they could theoretically sway an election entirely on their own.
Many of these concerns pre-date the 2023 explosion of interest in AI apps. In 2020, and even as far back as 2016, voters were warned about the potential of social media platforms and “deepfakes” to spread misinformation about candidates and their positions on the issues. The technology has only become more sophisticated over time; nothing like the new RNC ad was remotely possible when the first warnings about deepfakes appeared in the press.
More troubling still, it’s becoming increasingly clear that cutting-edge, extremely polished AI apps probably won’t even be necessary to fool a lot of voters. Over the last few election cycles, communication experts have found that you can trick a lot of people with so-called “cheapfakes,” which rely on basic editing techniques rather than high-tech tools like generative AI. One viral video from the 2020 election claimed to locate a hidden wire on Joe Biden, supposedly used to feed him information during a debate. Even though the “wire” was just an undoctored crease in Biden’s shirt, the fake video was shared thousands of times.
The Washington Post reports that political campaigns have started reaching out to social media platforms – including Facebook owner Meta – about how they plan to combat the spread of AI-created misinformation. According to the article, Meta responded that it will employ “independent fact-checkers” to examine media and apply warning labels to “dubious content.” This apparently raised concerns among the campaigns: human fact-checkers can be slow to react to fast-spreading viral falsehoods, and can’t really keep up with content that’s being rapidly shared and duplicated by users.
For its part, the Post offers a three-part strategy for members of the public attempting to identify deepfakes: check the hands, look for gibberish or garbled text, and scan the background for blurry or distorted details. These are, of course, the well-known glitches and sticking points of generative AI apps; concerningly, we’re seeing constant improvement on all of these fronts. Midjourney is already capable of producing lifelike hands.
It’s important to note, as well, that the existence of credible “deepfakes” and AI-generated videos also gives politicians a potential out, even when confronted with real evidence of divisive statements or outright wrongdoing. If the infamous behind-the-scenes “Access Hollywood” recording of Donald Trump were released today, for example, rather than in 2016, the former president could simply deny that it was his actual voice, opening room for doubt among supporters.
Opportunities for AI in Copywriting, Micro-Targeting, and Polling
Concerns about manipulated audio, images, and videos have sucked up most of the oxygen around the political impact of AI, but fake media is just one of many ways the technology will likely play a role in the 2024 presidential race, along with all future US elections. According to a recent piece from Politico, campaigns are very aware of AI technology’s potential impact, but remain in the brainstorming phase about how to employ it to their own advantage.
Many of the ideas about how to use AI center on copywriting. ChatGPT and similar products may sometimes decline to address specific political issues, due to guardrails installed by their creators to avoid potentially controversial or upsetting responses. But they can still be used to outline and workshop campaign emails, to get a sense of how various approaches and phrases might play with an audience. According to The New York Times, the Democratic Party has already started testing AI apps for composing fundraising emails, and has apparently found that – on occasion – the apps come up with pitches that work more effectively than their human-composed counterparts.
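To make that workflow concrete, here’s a minimal sketch of this kind of email workshopping, assuming the official `openai` Python package and an API key in the environment; the model name, prompts, and messaging “approaches” are invented for illustration, not drawn from any campaign’s actual practice:

```python
# Hypothetical sketch: draft several fundraising-email outlines, one per
# messaging approach, so a human writer can compare how each pitch reads.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented example approaches a campaign might want to compare.
APPROACHES = [
    "urgency ahead of a filing deadline",
    "gratitude toward recurring small donors",
    "contrast with the opponent's record",
]

def draft_variant(approach: str) -> str:
    """Ask the model to outline one email pitch built around a given approach."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "You help outline short fundraising emails."},
            {"role": "user", "content": f"Outline a ~120-word fundraising email emphasizing {approach}."},
        ],
    )
    return response.choices[0].message.content

for approach in APPROACHES:
    print(f"--- {approach} ---\n{draft_variant(approach)}\n")
```

The point of a sketch like this isn’t polished copy; it’s generating many rough variants cheaply, so human staffers can pick and refine the pitches that read best.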
The same kinds of Large Language Models (LLMs) that power apps like ChatGPT could also be used for what’s known in the political world as “micro-targeting.” In general, this just refers to creating political ads and messaging tailored to have maximum appeal and impact for a narrow, niche audience. Because AI apps can scan and process so much data so quickly, they could theoretically micro-target political advertising on an incredibly narrow scale, potentially even customizing ads in small ways for each individual viewer based on their pre-existing biases and preferences.
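As a rough illustration of the idea – not any campaign’s actual pipeline – here’s what LLM-driven micro-targeting might look like in a few lines of Python, with the voter segments, base message, and prompt wording all invented for the example:

```python
# Hypothetical sketch: rewrite one base ad line for narrow voter segments,
# leading with each segment's top issue. All data here is invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_MESSAGE = "Candidate X will lower everyday costs for working families."

SEGMENTS = [
    {"age": "65+", "region": "rural Midwest", "top_issue": "prescription drug prices"},
    {"age": "18-29", "region": "urban Sun Belt", "top_issue": "housing affordability"},
]

def tailor(message: str, segment: dict) -> str:
    """Ask the model to rewrite the ad copy around the segment's top issue."""
    prompt = (
        f"Rewrite this political ad line for a {segment['age']} voter in the "
        f"{segment['region']} whose top issue is {segment['top_issue']}: {message}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for segment in SEGMENTS:
    print(segment["top_issue"], "->", tailor(BASE_MESSAGE, segment))
```

Scale the segment list up to thousands of profiles drawn from a voter file, and the per-viewer customization described above stops being hypothetical and becomes a batch job.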
Similarly, heavily customizable and granular political polling presents another opportunity for AI to make its presence known. Earlier this month, a team of political and computer scientists from Brigham Young University used GPT-3 to mimic a political survey, conditioning responses on demographics like race, age, ideology, and religion. When comparing their results to actual poll results from the 2012, 2016, and 2020 US presidential campaigns, they found a high correspondence between the AI responses and real voters. AI “focus groups” could thus become a way to test all kinds of potential strategies, slogans, speeches, and approaches, allowing campaigns to tweak and fine-tune their messaging before it’s ever presented to an actual human audience.
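The mechanics of such a simulated survey are easy to gesture at. The sketch below only approximates the spirit of the BYU team’s approach, not its actual design: it conditions the model on a handful of invented demographic personas, asks a yes/no question, and tallies the answers.

```python
# Hypothetical sketch of an "AI focus group": answer a survey question
# in character as different personas, then tally the responses.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented personas; a real study would sample many more, weighted to census data.
PERSONAS = [
    "a 45-year-old suburban independent who attends church weekly",
    "a 23-year-old urban progressive renter",
    "a 68-year-old rural conservative retiree",
]

QUESTION = "Answer only YES or NO: do you support expanding early-voting hours?"

def simulated_answer(persona: str) -> str:
    """Ask the model to answer the survey question as the persona would."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": f"Answer survey questions as {persona} would."},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content.strip().upper()

tally = Counter(simulated_answer(p) for p in PERSONAS)
print(tally)  # e.g. Counter({'YES': 2, 'NO': 1})
```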
So Is AI a Real Threat to the 2024 Election?
Not everyone is convinced that the end is nigh and that these potential AI threats are bona fide concerns. This week, British journalist Gavin Haynes argued that journalists – not AI apps – present the gravest challenge to a free and fair 2024 presidential race. Haynes notes that ideas like “AI focus groups,” while they might have some utility, are necessarily tied to the past: the underlying models were trained on what people previously said about their political opinions and ideas, not how they feel today or how they will feel next week, which presents a natural barrier to their utility in fast-moving political campaigns. He also points out that, so far, conventional reporting has been pretty good at pushing back against fraudulent deepfakes. Even that relatively believable image of the Pope in a puffer jacket was debunked almost immediately, and it wasn’t particularly shocking.
As we’ve seen in the last several election cycles, misinformation doesn’t require artificial intelligence to help it spread. Still, Haynes’ certainty that responsible journalism can adequately push back against whatever AI apps throw at our feeds feels a bit premature. We’ve yet to see what tactics the candidates’ campaigns will come up with for these tools, let alone what lone-wolf bad actors around the web will do independently once the election really starts to heat up.