The RNC Created a Negative AI-Generated Ad in Response to Biden’s Reelection Bid

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.
Evan Xie

Earlier this week, President Joe Biden announced his intention to run for re-election in 2024, and the Republican National Committee (RNC) responded with a negative ad imagining the dystopian future awaiting us should he secure a second term. The video is made up entirely of images created by generative AI software, appearing to show President Biden and VP Harris’ victory celebration, followed by nightmarish images of the immediate consequences for Americans. (These include a flood of new immigrants arriving at the southern border, various international conflicts, and a new financial crisis.) It’s the RNC’s first entirely AI-generated ad, and one of the first major political ads in U.S. history created with a generative AI app.

While some of the ad’s images look fairly lifelike, there remains an uncanny surreality to many shots. The ad’s creators employed this effect purposefully, suggesting not verifiable reality but an imagined, “dark” future should the Republican candidate ultimately lose the election. The current state of generative AI is rather ideal for designing a dystopian near-future, rather than an entirely credible and compelling vision of our own world.

Still, the existence of an AI-generated political ad – a year and a half before Americans actually go to the polls – serves as something of a canary in the coal mine moment. Whether or not AI apps will impact politics is no longer a purely theoretical question: the technology is here right now, and it’s already making a difference.

The Growing Concerns of AI-Created Misinformation in Elections

Not surprisingly, the ability of generative AI apps to create credible fake images, audio clips, or even videos has received most of the attention. Putting words in a candidate’s mouth or depicting them in a compromising scenario obviously has a lot of persuasive power. If AI fakes were believable enough, and spread far enough before being discredited, they could theoretically sway an election entirely on their own.

Many of these concerns pre-date the 2023 explosion of interest in AI apps. In 2020, and even as far back as 2016, voters were warned about the potential of social media platforms and “deepfakes” to spread misinformation about candidates and their positions on the issues. The technology has only become more sophisticated over time; nothing like the new RNC ad was remotely possible when the first warnings about deepfakes appeared in the press.

Even more regrettably, it’s becoming increasingly clear that cutting-edge, extremely polished AI apps probably aren’t even going to be necessary to fool a lot of voters. Over the last few election cycles, communication experts have found that you can trick a lot of people with so-called “cheapfakes,” which rely on basic editing techniques rather than high-tech solutions like generative AI. One viral video from the 2020 election claimed to locate a hidden wire on Joe Biden, used to feed him information during a debate. Even though the line was just an undoctored crease in Biden’s shirt, the fake video was shared thousands of times.

The Washington Post reports that political campaigns have started reaching out to social media platforms – including Facebook’s parent company, Meta – about how they plan to combat the spread of AI-created misinformation. According to the article, Meta responded that it will employ “independent fact-checkers” to examine media and apply warning labels to “dubious content.” This apparently raised concerns among the campaigns, as human fact-checkers can sometimes be slow to react to fast-spreading viral falsehoods, and can’t really keep up with content that’s being rapidly shared and duplicated by users.

For its part, the Post offers a three-part strategy for members of the public attempting to identify deepfakes: check the hands, look for gibberish or garbled text, and scan the background for blurry or distorted details. These are, of course, the well-known glitches and sticking points for generative AI apps; concerningly, we’re seeing constant improvement on these fronts. Midjourney is already capable of producing lifelike hands.

It’s important to note, as well, that the existence of credible “deepfakes” and AI-generated videos also gives politicians a potential out, even when confronted with real evidence of divisive statements or outright wrongdoing. If the infamous behind-the-scenes “Access Hollywood” recording of Donald Trump were released today, for example, rather than in 2016, the former president could simply deny that it was his actual voice, opening room for doubt among supporters.

Opportunities for AI in Copywriting, Micro-Targeting, and Polling

Concerns about manipulated audio, images and videos have sucked up most of the oxygen around the political impact of AI, but they’re just one of many ways that the technology will likely play a role in the 2024 presidential race, along with all future US elections. According to a recent piece from Politico, campaigns are very aware of the potential impact of AI technology, but remain in the brainstorming phase about how to employ it for their personal benefit.

Many of the ideas about how to use AI center around copywriting. ChatGPT and similar products may sometimes decline to address specific political issues, due to guardrails installed by their creators to avoid potentially controversial or even upsetting responses. But they can still be used to outline and workshop campaign emails, to get a sense of how various approaches and phrases might play with an audience. According to The New York Times, the Democratic Party has already started testing AI apps for composing fundraising emails, and has apparently found that – on occasion – the apps come up with pitches that work more effectively than their human-composed counterparts.

The same kinds of Large Language Models (LLMs) that power apps like ChatGPT could be used for what’s known in the political world as “micro-targeting.” In general, this just refers to creating political ads and messaging that’s likely to have a lot of appeal and impact for a narrow, niche audience. With AI apps’ ability to scan and process so much data so quickly, theoretically, it’s possible they could micro-target political advertising on an incredibly narrow scale, potentially even customizing ads in some small ways for each individual viewer based on their pre-existing biases and preferences.
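To make the micro-targeting idea concrete, here is a minimal sketch of how a tool might assemble a per-viewer prompt for an LLM from a profile of known preferences. All names, fields, and wording here are hypothetical illustrations, not any real campaign system or vendor API.

```python
# Hypothetical sketch: tailoring one base campaign message per viewer by
# templating the viewer's traits into an LLM prompt. The profile fields
# and base message are invented for illustration.

BASE_MESSAGE = "Our candidate will lower costs for working families."

def build_targeted_prompt(profile: dict) -> str:
    """Combine the base message with viewer traits into a rewrite prompt."""
    traits = ", ".join(f"{k}: {v}" for k, v in sorted(profile.items()))
    return (
        "Rewrite the following campaign message so it resonates with a "
        f"viewer with these traits ({traits}), keeping the core claim "
        f"unchanged:\n\n{BASE_MESSAGE}"
    )

prompt = build_targeted_prompt(
    {"age": "34", "region": "suburban Ohio", "top_issue": "grocery prices"}
)
```

The expensive part in practice would be the model call and the data pipeline feeding the profiles; the templating itself is trivial, which is part of why per-viewer customization becomes plausible at scale.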

Similarly, heavily customizable and granular political polling presents another opportunity for AI to make its presence known. Earlier this month, a team of political and computer scientists from Brigham Young University used GPT-3 to mimic a political survey, tweaking responses based on demographics like race, age, ideology, and religion. When comparing their results to actual poll results from the 2012, 2016, and 2020 US presidential campaigns, they found a high correspondence between the AI responses and real voters. AI “focus groups” could thus become a way to test all kinds of potential strategies, slogans, speeches, and approaches, allowing campaigns to tweak and fine-tune their messaging before it’s ever presented to an actual human audience.
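The study’s basic approach can be sketched as persona-conditioned prompting: prepend a demographic backstory to the survey question before sending it to the model. This is an illustrative reconstruction under assumptions; the field names and phrasing are not the researchers’ actual prompts.

```python
# Illustrative sketch of persona-conditioned survey prompting, in the
# spirit of the BYU study: condition the model on a first-person
# demographic backstory, then pose the survey question.

def persona_prompt(demographics: dict, question: str) -> str:
    """Build a prompt for one simulated respondent."""
    backstory = " ".join(
        f"My {field} is {value}." for field, value in demographics.items()
    )
    return f"{backstory}\nQuestion: {question}\nAnswer:"

p = persona_prompt(
    {"age": "52", "ideology": "conservative", "religion": "Catholic"},
    "Which presidential candidate do you plan to vote for?",
)
```

Running the same question across many sampled personas, then tallying the completions, is what lets such a setup stand in for a conventional poll.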

So is AI a Real Threat to the 2024 Election?

Not everyone is convinced that the end is nigh and these potential AI threats are bona fide concerns. This week, British journalist Gavin Haynes argued that journalists – not AI apps – present the gravest challenge to a free and fair 2024 presidential race. Haynes notes that ideas like “AI focus groups,” while they might have some utility, are necessarily tied to the past: the underlying models were trained on what people previously said about their political opinions, not how they feel today or how they will feel next week, presenting a natural barrier to their utility in fast-moving political campaigns. He also points out that, so far, conventional reporting has been pretty good at pushing back against fraudulent deepfakes. Even that relatively believable image of the Pope in a puffer jacket was debunked almost immediately, and it wasn’t particularly shocking.

As we’ve seen in the last several election cycles, misinformation doesn’t require artificial intelligence to help it spread. Still, Haynes’ certainty that responsible journalism can adequately push back against whatever AI apps can throw at your feed feels a bit premature. We’ve yet to see what tactics the candidates’ campaigns are going to come up with for these tools, let alone what lone wolf bad actors around the web are going to do independently once the election really starts to heat up.


Creandum’s Carl Fritjofsson on the Differences Between the Startup Ecosystem in Europe and the U.S.

Decerry Donato

Decerry Donato is a reporter at dot.LA. Prior to that, she was an editorial fellow at the company. Decerry received her bachelor's degree in literary journalism from the University of California, Irvine. She continues to write stories to inform the community about issues or events that take place in the L.A. area. On the weekends, she can be found hiking in the Angeles National forest or sifting through racks at your local thrift store.

Carl Fritjofsson

On this episode of the LA Venture podcast, Creandum General Partner Carl Fritjofsson talks about his venture journey, why Generative-AI represents an opportunity to rethink products from the ground up, and why Q4 2023 and Q1 2024 could be "pretty bloody" for startups.


AI Is Undergoing Some Growing Pains at a Pivotal Moment in Its Development

Evan Xie

One way to measure just how white-hot AI development has become: the world is running out of the advanced graphics chips necessary to power AI programs. While Intel central processing units were once the most sought-after industry leaders, advanced graphics chips like Nvidia’s are designed to run multiple computations simultaneously, a baseline necessity for many AI models.

An early version of ChatGPT required around 10,000 graphics chips to run. By some estimates, newer updates require 3-5 times that amount of processing power. As a result of this skyrocketing demand, shares of Nvidia have jumped 165% so far this year.

Building on this momentum, Nvidia this week revealed a line-up of new AI-related projects, including an Israeli supercomputer and a platform that uses AI to help video game developers. For smaller companies and startups, however, access to the vital underlying technology that powers AI development is already becoming less about meritocracy and more about “who you know.” According to the Wall Street Journal, Elon Musk scooped up a valuable share of server space from Oracle this year for his new OpenAI rival, X.AI, before anyone else got a crack at it.

The massive demand for Nvidia-style chips has also created a lucrative secondary market, where smaller companies and startups are often outbid by larger and more established rivals. One startup founder compares the fevered crush of the current chip marketplace to toilet paper in the early days of the pandemic. For those companies that don’t get access to the most powerful chips or enough server space in the cloud, often the only remaining option is to simplify their AI models, so they can run more efficiently.

Beyond just the design of new AI products, we’re also at a key moment for users and consumers, who are still figuring out what sorts of applications are ideal for AI and which ones are less effective, or potentially even unethical or dangerous. There’s now mounting evidence that the hype around some of these AI tools is reaching a lot further than the warnings about its drawbacks.

JP Morgan Chase is training a new AI chatbot, known as IndexGPT, to help customers choose financial securities and stocks. For now, the bank insists that it’s purely supplemental, designed to advise rather than replace money managers, but it may just be a matter of time before job losses begin to hit financial planners along with everyone else.

Just this week, a lawyer in New York was busted by a judge for using ChatGPT as part of his background research. When questioned by the judge, lawyer Peter LoDuca revealed that he’d farmed out some research to a colleague, Steven A. Schwartz, who had consulted ChatGPT on the case. Schwartz was apparently unaware that the AI chatbot was able to lie – transcripts even show him questioning ChatGPT’s responses and the bot assuring him that these were, in fact, real cases and citations.

New research by Maurice Jakesch, a doctoral student at Cornell University, suggests that even users who are more aware than Schwartz of how AI works and its limitations may still be influenced in subtle, subconscious ways by its output.

Not to mention, according to one recent survey, high school and college students already – on the whole – prefer using ChatGPT for help with schoolwork over a human tutor. The survey also notes that advanced students tend to report getting more out of ChatGPT-type programs than beginners, likely because they have more baseline knowledge and can construct better, more informative prompts.

But therein lies the big drawback to using ChatGPT and other AI tools for education. At least so far, they’re reliant on the end user writing good prompts and having some sense about how to organize a lesson plan for themselves. Human tutors, on the other hand, have a lot of personal experience in these kinds of areas. Someone who instructs others in foreign languages professionally probably has a good inherent sense of when you need to focus on expanding your vocabulary vs. drilling certain kinds of verb and tense conjugations. They’ve helped many other students prepare for tests, quizzes, and real-world challenges, while computer software can only guess at what kinds of scenarios its proteges will face.

A recent Forbes editorial by academic Thomas Davenport suggests that, while AI is getting all the hype right now, other forms of computing or machine learning are still going to be more effective for a lot of basic tasks. From a marketing perspective in 2023, it’s helpful for a tech company to throw the “AI” brand around, but it’s not magically going to be the answer for every problem.

Davenport points to a similar (if smaller) whirlwind of excitement around IBM’s “Watson” in the early 2010s, when it famously took out human “Jeopardy!” champions. It turns out Watson was a general knowledge engine, really best suited for jobs like playing “Jeopardy!” But after the software gained celebrity status, people tried to use it for all sorts of advanced applications, like designing cancer drugs or providing investment advice. Today, few people turn to Watson for these kinds of solutions; it’s just the wrong tool for the job. In the same way, Davenport suggests that generative AI is in danger of being misapplied.

While the industry and end users both race to solve the AI puzzle in real time, governments are also feeling pressure to step in and potentially regulate the AI industry. This is much easier said than done, though, as politicians face the same kinds of questions and uncertainty as everyone else.

OpenAI CEO Sam Altman has been calling for governments to begin regulating AI, but just this week, he suggested that the company might pull out of the European Union entirely if the regulations were too onerous. Specifically, Altman worries that attempts to narrow what kinds of data can be used to train AI systems – specifically blocking copyrighted material – might well prove impossible. “If we can comply, we will, and if we can’t, we’ll cease operating,” Altman told Time. “We will try, but there are technical limits to what’s possible.” (Altman has already started walking this threat back, suggesting he has no immediate plans to exit the EU.)

In the US, The White House has been working on a “Blueprint for an AI Bill of Rights,” but it’s non-binding, just a collection of largely vague suggestions. It’s one thing to agree “consumers shouldn’t face discrimination from an algorithm” and “everyone should be protected from abusive data practices and have agency over how their data is used.” But enforcement is an entirely different animal. A lot of these issues already exist in tech, and are much larger than AI, and the US government already doesn’t do much about them.

Additionally, it’s possible AI regulations won’t work well at all if they aren’t global. Even if you set some policies and get an entire nation’s government to agree, how do you establish similar protocols worldwide? What if the US and Europe agree but India doesn’t? Everyone around the world accesses roughly the same internet, so without an international standard, it will be much harder for individual nations to enforce specific rules. As with so many other AI developments, there’s inherent danger in patchwork regulations: they could allow some companies, regions, or players to move forward while others are unfairly or ineffectively stymied or held back.

The same kinds of socio-economic concerns around AI that we have nationally – some sectors of the work force left behind, the wealthiest and most established players coming in to the new market with massive advantages, the rapid spread of misinformation – are all, in actuality, global concerns. Just as the hegemony of Microsoft and Google threaten the ability of new players to enter the AI space, the West’s early dominance of AI tech threatens to push out companies and innovations from emerging markets like Southeast Asia, Subsaharan Africa, and Central America. Left unfettered, AI could potentially deepen social, economic, and digital divisions both within and between all of these societies.

Undaunted, some governments aren’t waiting around for these tools to develop any further before they start attempting to regulate them. New York City has already set up rules, which will take effect in July, about how AI can be used during the hiring process. The law requires any company using AI software in hiring to notify candidates that it’s being used, and to have independent auditors check the system annually for bias.
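One metric an audit like that might compute is an impact ratio: each group’s selection rate divided by the highest group’s rate, flagging large disparities. This is a minimal sketch assuming simple per-group counts of applicants and selections; the law’s actual audit requirements are more detailed than this single number.

```python
# Illustrative sketch of one common hiring-bias audit metric: the
# "impact ratio" of selection rates across demographic groups.
# Group names and counts are invented example data.

def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 15},
    applicants={"group_a": 100, "group_b": 60},
)
# group_a rate = 0.40, group_b rate = 0.25, so ratios are 1.0 and 0.625
```

A ratio well below 1.0 for some group is the kind of signal an auditor would investigate further, not proof of bias on its own.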

This sort of piecemeal figure-it-out-as-we-go approach is probably what’s going to be necessary, at least short-term, as AI development shows zero signs of slowing down or stopping any time soon. Though there’s some disagreement among experts, most analysts agree with Wharton professor and economist Jeremy Siegel, who told CNBC this week that AI is not yet a bubble. He pointed to the Nvidia earnings as a sign the market remains healthy and not overly frothy. So, at least for now, the feverish excitement around AI is not going to burst like a late ‘90s startup stock. The world needs to prepare as if this technology is going to be with us for a while.

What the Future of Rivian Looks Like According to CEO RJ Scaringe

David Shultz

David Shultz reports on clean technology and electric vehicles, among other industries, for dot.LA. His writing has appeared in The Atlantic, Outside, Nautilus and many other publications.


Rivian CEO RJ Scaringe took to Instagram last weekend to answer questions from the public about his company and its future. Topics covered included new colors, sustainability, the production ramp, and new products and features. Speaking of which, viewers also got a first look at the company’s much-anticipated R2 platform – albeit made of clay and covered by a sheet, but hey, that’s…something. If you don’t want to watch the whole 33-minute video, which is now also on YouTube, we’ve got the highlights for you.
