
Earlier this week, President Joe Biden announced his intention to run for re-election in 2024, and the Republican National Committee (RNC) responded with a negative ad imagining the dystopian future awaiting us should he secure a second term. The video is made up entirely of images created by generative AI software, appearing to show President Biden and VP Harris’ victory celebration, followed by nightmarish images of the immediate consequences for Americans. (These include a flood of new immigrants arriving at the southern border, various international conflicts, and a new financial crisis.) It’s the RNC’s first entirely AI-generated ad, and one of the first major political ads in U.S. history created with a generative AI app.
While some of the ad’s images look fairly lifelike, an uncanny surreality lingers over many shots. The ad’s creators have employed this purposefully, suggesting not verifiable reality but an imagined, “dark” future should the Republican candidate ultimately lose the election. The current state of generative AI is, in fact, better suited to designing a dystopian near-future than to rendering an entirely credible and compelling vision of our own world.
Still, the existence of an AI-generated political ad – a year and a half before Americans actually go to the polls – serves as something of a canary in the coal mine moment. Whether or not AI apps will impact politics is no longer a purely theoretical question: the technology is here right now, and it’s already making a difference.
The Growing Concerns of AI-Created Misinformation in Elections
Not surprisingly, the ability of generative AI apps to create credible fake images, audio clips, or even videos has received most of the attention. Putting words in a candidate’s mouth or depicting them in a compromising scenario obviously carries a lot of persuasive power. If AI fakes were believable enough, and spread far enough before being discredited, they could theoretically sway an election entirely on their own.
Many of these concerns pre-date the 2023 explosion of interest in AI apps. In 2020, and even as far back as 2016, voters were warned about the potential of social media platforms and “deepfakes” to spread misinformation about candidates and their positions on the issues. The technology has only become more sophisticated over time; nothing like the new RNC ad was remotely possible when the first warnings about deepfakes appeared in the press.
Even more regrettably, it’s becoming increasingly clear that cutting-edge, extremely polished AI apps probably won’t even be necessary to fool a lot of voters. Over the last few election cycles, communication experts have found that you can trick a lot of people with so-called “cheapfakes,” which rely on basic editing techniques rather than high-tech solutions like generative AI. One viral video from the 2020 election claimed to locate a hidden wire on Joe Biden, used to feed him information during a debate. Even though the supposed wire was just an undoctored crease in Biden’s shirt, the fake video was shared thousands of times.
The Washington Post reports that political campaigns have started reaching out to social media platforms – including Facebook owner Meta – about how they plan to combat the spread of AI-created misinformation. According to the article, Meta responded that it will employ “independent fact-checkers” to examine media and apply warning labels to “dubious content.” This apparently raised concerns among the campaigns, as human fact-checkers can be slow to react to fast-spreading viral falsehoods, and can’t really keep pace with content that’s being rapidly shared and duplicated by users.
For its part, the Post offers a three-part strategy for members of the public attempting to identify deepfakes: check out the hands, look for gibberish or garbled text, and scan the background for blurry or distorted details. These are, of course, the well-known glitches and sticking points for generative AI apps; concerningly, we’re seeing constant improvement on these fronts. Midjourney is already capable of producing lifelike hands.
It’s important to note, as well, that the existence of credible “deepfakes” and AI-generated videos also gives politicians a potential out, even when confronted with real evidence of divisive statements or outright wrongdoing. If the infamous behind-the-scenes “Access Hollywood” recording of Donald Trump were released today, for example, rather than in 2016, the former president could simply deny that it was his actual voice, leaving room for doubt among supporters.
Opportunities for AI in Copywriting, Micro-Targeting, and Polling
Concerns about manipulated audio, images and videos have sucked up most of the oxygen around the political impact of AI, but they’re just one of many ways the technology will likely play a role in the 2024 presidential race, along with all future US elections. According to a recent piece from Politico, campaigns are very aware of the potential impact of AI technology, but remain in the brainstorming phase about how to employ it to their own advantage.
Many of the ideas about how to use AI center on copywriting. ChatGPT and similar products may sometimes decline to address specific political issues, due to guardrails installed by their creators to avoid potentially controversial or even upsetting responses. But they can still be used to outline and workshop campaign emails, to get a sense of how various approaches and phrases might play for an audience. According to The New York Times, the Democratic Party has already started testing the use of AI apps in composing fundraising emails, and has apparently found that – on occasion – the apps come up with pitches that perform more effectively than their human-composed counterparts.
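To make that workflow concrete, here is a minimal sketch of how a campaign might workshop several pitch variants at once. The call_llm helper and the list of tones are illustrative assumptions, not a description of any party’s actual tooling.

```python
# A minimal sketch of LLM-assisted email workshopping. The `call_llm`
# helper is a hypothetical stand-in for whichever LLM API is in use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

TONES = ["urgent", "hopeful", "folksy", "data-driven"]  # assumed examples

def draft_pitches(issue: str) -> dict[str, str]:
    """Return one draft fundraising email per tone, for human review."""
    drafts = {}
    for tone in TONES:
        prompt = (
            f"Draft a short fundraising email about {issue}. "
            f"Use a {tone} tone and end with a clear donation ask."
        )
        drafts[tone] = call_llm(prompt)
    return drafts
```

In practice, the drafts would presumably still be vetted by humans and A/B tested on small email segments before any full send, with the winning pitch kept for the wider list.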
The same kinds of Large Language Models (LLMs) that power apps like ChatGPT could be used for what’s known in the political world as “micro-targeting.” In general, this just refers to creating political ads and messaging likely to have outsized appeal and impact for a narrow, niche audience. With AI apps’ ability to scan and process so much data so quickly, it’s theoretically possible they could micro-target political advertising on an incredibly narrow scale, potentially even customizing ads in small ways for each individual viewer based on their pre-existing biases and preferences.
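As a rough illustration of how per-viewer customization might work, the sketch below rewrites one base message around a simple voter profile. The profile fields and the call_llm helper are assumptions for illustration, not any campaign’s real data model.

```python
# Illustrative sketch of LLM micro-targeting: rewrite one base ad
# message around a simple (assumed) voter profile.

from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

@dataclass
class VoterProfile:
    age_band: str   # e.g. "35-44"
    region: str     # e.g. "suburban Midwest"
    top_issue: str  # e.g. "housing costs"

BASE_MESSAGE = "Our candidate will fight for working families."

def tailor_ad(profile: VoterProfile) -> str:
    """Ask the model to re-angle the base message toward one viewer."""
    prompt = (
        f"Rewrite this ad line for a voter aged {profile.age_band} in the "
        f"{profile.region} whose top issue is {profile.top_issue}, keeping "
        f"the core claim intact: '{BASE_MESSAGE}'"
    )
    return call_llm(prompt)
```

Run across millions of profiles, a loop like this is what would separate true AI micro-targeting from the coarser demographic buckets campaigns use today.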
Similarly, heavily customizable and granular political polling presents another opportunity for AI to make its presence known. Earlier this month, a team of political and computer scientists from Brigham Young University used GPT-3 to mimic political survey responses, conditioning the model on demographics like race, age, ideology, and religion. When comparing their results to actual poll results from the 2012, 2016, and 2020 US presidential campaigns, they found a high correspondence between the AI responses and real voters. AI “focus groups” could thus become a way to test out all kinds of potential strategies, slogans, speeches, and approaches, allowing campaigns to tweak and fine-tune their messaging before it’s ever presented to an actual human audience.
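The basic loop behind that kind of study is easy to sketch: condition the model on a demographic persona, treat its completions as survey answers, and tally them. The persona fields and call_llm helper below are illustrative assumptions, not the BYU team’s actual code.

```python
# Sketch of simulated polling: prompt an LLM with a demographic
# persona, then tally its completions like survey responses.
# `call_llm` and the persona fields are illustrative assumptions.

from collections import Counter

QUESTION = "In the upcoming presidential election, who do you plan to vote for?"

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def simulate_poll(personas: list[dict], samples_per_persona: int = 20) -> Counter:
    """Tally synthetic survey answers across a list of persona dicts."""
    tallies: Counter = Counter()
    for p in personas:
        backstory = (
            f"I am a {p['age']}-year-old {p['race']} voter. Ideologically, "
            f"I consider myself {p['ideology']}, and religion is "
            f"{p['religiosity']} to me."
        )
        for _ in range(samples_per_persona):
            answer = call_llm(f"{backstory}\nInterviewer: {QUESTION}\nMe:")
            tallies[answer.strip()] += 1
    return tallies
```

Weighted to match real-world demographics, a synthetic sample like this can then be checked against actual polling, which is the kind of validation the BYU team ran against the 2012, 2016, and 2020 races.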
So Is AI a Real Threat to the 2024 Election?
Not everyone is convinced that the end is nigh, or that these potential AI threats are bona fide concerns. This week, British journalist Gavin Haynes argued that journalists – not AI apps – present the gravest challenge to a free and fair 2024 presidential race. Haynes notes that ideas like “AI focus groups,” while they might have some utility, are necessarily tied to the past. The underlying models were trained on what people previously said about their political opinions and ideas, not how they feel today or how they will feel next week, presenting a natural barrier to their utility in fast-moving political campaigns. He also points out that, so far, conventional reporting has been pretty good at pushing back against fraudulent deepfakes. Even that relatively believable image of the Pope in a puffer jacket was debunked almost immediately, and it wasn’t particularly shocking.
As we’ve seen in the last several election cycles, misinformation doesn’t require artificial intelligence to help it spread. Still, Haynes’ certainty that responsible journalism can adequately push back against whatever AI apps throw at our feeds feels a bit premature. We’ve yet to see what tactics the candidates’ campaigns will come up with for these tools, let alone what lone-wolf bad actors around the web will do independently once the election really starts to heat up.
- Is AI Art Genuine Creativity or a Gimmick To Go Viral? ›
- AI Chatbots Aren’t 'Alive.' Why Is the Media Trying to Convince Us Otherwise? ›
- Would Biden's Proposed AI 'Bill of Rights' Be Effective—Or Just More Virtue Signaling? ›
- Generative AI Apps Still Have A Long Way to Go Before They Start Swaying Elections ›
- LA Tech Week: AI's Role in Advertising and Marketing - dot.LA ›
Interest online around new AI companies, concepts and pitches remains as frothy as ever. A widely-shared thread over the weekend by Silicon Valley computer scientist Dr. Pratik Desai instructs readers to begin recording their elders, as he predicts (with 100% certainty!) that we’ll be able to map and preserve human consciousness “by the end of the year.” The tweets racked up over 10 million views in just three days. A new piece from the MIT Technology Review touts the ways AI apps will aid historians in understanding societies of the past. Just the other day, a Kuwait-based media company introduced an entirely virtual news anchor named “Fedha.”
But in the aftermath of a much-discussed open letter in which some industry insiders suggested a pause on AI development, and as the Biden Administration considers potential new regulations around AI research, it appears that we’ve entered something of a backlash moment. Every new, excited thread extolling the futuristic wonders of AI image generators and chatbots is now accompanied by a dire warning about the technology’s potentially dangerous consequences, or at least an acknowledgment that it’s not yet delivering on its full promise.
The Government Responds to Recent Developments in AI
To be clear, any actual movement by the federal government on AI regulation remains a good way off. President Biden’s Commerce Department this week put out a formal request for comments on potential new accountability measures, specifically around the question of whether new AI models should require certification before being released to the public. Commerce Dept. official Alan Davidson told the Wall Street Journal that the government’s chief concern was putting “guardrails in place to make sure [AI tools] are being used responsibly.” The comment period will last for 60 days, after which the agency will publish its advice to lawmakers about how to approach AI. Then and only then will lawmakers begin debating specific policies or approaches.
Biden’s Justice Department is also reportedly monitoring competition in the AI sector, while the Federal Trade Commission has cautioned tech companies about making “false or unsubstantiated claims” about their AI products. Democratic Colorado Sen. Michael Bennet told WSJ that his main concerns centered around children’s safety, specifically mentioning reports about chatbots giving troubling or dangerous advice to users posing as children. When President Biden was asked by a reporter at the White House last week whether or not he thinks AI is dangerous, he responded: “It remains to be seen. It could be.”
Chinese regulators are also mulling over new rules around AI development this week, following the release of chatbots and apps by large tech firms Baidu and Alibaba. China’s ruling party has already embraced AI for its own purposes, of course, using the technology for oversight and surveillance. The Atlantic reports that Chinese president Xi Jinping aims to use AI applications to create “an all-seeing digital system of social control.”
But beyond long-percolating actions at the highest levels of power, there’s also been a wider-scale, subtle but still noticeable shift in sentiment around some of these recent AI developments.
Mainstreaming of AI Raises More Employment Concerns
In some ways, these are the same concerns that have been spoken about in tech circles for years, now going mainstream. A recent editorial in USA Today, for example, picks up on the concerns about potential misuse of generative AI images to influence elections or steer public opinion, arguing that it’s only a matter of time before it becomes impossible to distinguish between AI-generated images and the real thing.
A report in today’s Washington Post centers on the sudden appearance of “fake pornography” generated by AI apps like Midjourney and Stable Diffusion, including a fictional woman named “Claudia” who is selling “nude photos” via direct message to users on Reddit. Impressive though the technology itself may be, the Post piece highlights a number of potential downsides. Though some users no doubt realize what they’re purchasing, others are being fooled into believing that Claudia is an actual human selling real photographs. Additionally, similar techniques could, of course, be employed to make artificial pornographic images that resemble real women, in a cutting-edge new form of sexual harassment.
Then, of course, there’s the potential competition for real workers in the adult industry, who could (at least theoretically) be put out of work by AI-generated models. OnlyFans model Zoey Sterling told the Post that she’s not concerned about being replaced by AI yet, but some digital rivals have already started appearing on the scene.
In another viral story about AI taking human jobs, Rest of World reports that AI-produced artwork is already impacting the Chinese gaming industry. One freelance illustrator told the publication that nearly all of her gaming work has dried up; she’s now more frequently employed to tweak or clean up AI-generated imagery than to create original artwork herself, at a tenth of her previous pay rate. One Chinese game studio told the site that five of its 15 character-design illustrators have been laid off so far this year.
Over in Vox, reporter Sigal Samuel worries that – over a long enough timeline – chatbots like ChatGPT could more generally homogenize our world and flatten out human creativity. Already, a significant amount of online text is now composed by chatbots. As future chatbots are trained using published content from the internet, this means that – in the near future – robots will learn how to write from other robots. Could this mean the permanent end of original thought, as we continually rewrite, rearrange, and recompile ideas that were already published in the past?
Geopolitical Concerns Around AI Continue to Grow
Only if humanity survives for long enough! An item this week from Foreign Policy notes that AI could completely alter geopolitics and warfare, and features a number of chilling predictions about the use of dystopian tech like automated drones, AI-driven software that helps leaders make strategic and tactical decisions, and even AI upgrades that make existing weapons systems more potent. A February report by the Arms Control Association warns that AI could expand the capability of existing weapons like hypersonic missiles to the point of “blurring the distinction between a conventional and nuclear attack.”
Finally, to acknowledge the potential AI consequence that’s always on everyone’s mind, we recently witnessed the debut of ChaosGPT, an experimental open-source attempt to encourage OpenAI’s GPT-4 to lay out a plan for global domination. After removing the OpenAI guardrails that prohibit these specific lines of inquiry, the ChaosGPT team worked with GPT-4 on an extensive plan for humanity’s destruction, which involved both generating support for its plans on social media and acquiring nuclear weapons. Though ChaosGPT had a number of interesting ideas, such as coordinating with other GPT systems to further its goal, the program ultimately didn’t manage to devise a workable plan to take over the planet and kill all the humans. Oh well, next time.
- Art Created By Artificial Intelligence Can’t Be Copyrighted, US Agency Rules ›
- Would Biden's Proposed AI 'Bill of Rights' Be Effective—Or Just More Virtue Signaling? ›
- Is AI Making the Creative Class Obsolete? ›
- AI Chatbots Aren’t 'Alive.' Why Is the Media Trying to Convince Us Otherwise? ›
- Writers Are Fighting To Save Their Jobs From AI Chatbots. - dot.LA ›
- Is Public Interest in AI Shifting? - dot.LA ›
Twitter kicked off the New Year by announcing it would relax a controversial ban on political ads and other promotions pushing specific causes. The move is only the latest effort by CEO Elon Musk to boost the platform’s struggling ad business, which took a hit last year after a number of advertisers left over Musk’s volatile statements on the platform. Some companies have since returned.
But digital agencies that have worked on LA-based advocacy and political campaigns don’t think clients will make Twitter a major part of their ad strategy. Ad execs say the platform’s lack of specific microtargeting tools – along with the fact that it has a much smaller user base than ad giants Meta and Google – makes it less attractive than its competitors. Not to mention that since the 2019 ban went into effect, many clients have pivoted to newer ways of reaching voters, such as paying influencers on TikTok or buying ads on streaming platforms.
“Twitter has always been more of a niche product, very well suited to reaching people who are very engaged in the process and following the news closely,” said Jamie Patton, the director of digital agency Uplift — which counts the congressional campaign for Rep. Katie Porter (CA-45) as one of its clients, along with candidates for LA City Council and LA City Attorney.
In other words, Twitter users aren’t exactly the general public – a 2019 Pew poll found that Twitter’s audience is younger, more educated, higher income and more likely to identify with Democrats than the nation overall. Such an uneven sample is why Twitter hype doesn’t always translate into real-world results – and why the platform can be a poor predictor of box office success, elections and the stock market.
“Twitter requires a specific and unique marketing approach to succeed,” said Erik Rose, a partner at public affairs agency EKA. “You can’t approach it the way you would your Facebook, Instagram, or YouTube marketing. And also can’t simply cross-promote your existing content.”
According to Patton, Twitter ads have primarily been effective in cases where a campaign needs access to a niche audience. “We ran political ads on the platform for years, more often ‘advocacy’ content designed to reach a more engaged audience, with very good results,” said Patton.
But such rough targeting paled in comparison to the tools offered by Google and Meta-owned platforms, which include Facebook, Instagram, WhatsApp and Messenger. Patton says Twitter’s targeting capabilities are “pretty limited” for someone who wants to reach a broad demographic. Which is to say, if your goal is to appeal to a swath of persuadable voters, you’re probably not going to spend your ad dollars on Twitter.
If Twitter does get the formula right—Patton said she’d like to see the company offer more one-on-one targeting, release more data on audience reach and provide more transparency on ad frequency—political campaigns could help boost its sinking ad revenue. According to digital ad analytics firm AdImpact, opponents and advocates of California’s sports betting ballot initiative Proposition 27 spent a combined $21.5 million on Facebook and Google ads in 2022. In fact, the initiative had the second largest political digital ad spend of 2022, just behind Georgia’s Senate campaigns. While such a campaign was only a drop in the bucket for Twitter’s competitors (Meta CEO Mark Zuckerberg has said political ads account for less than one percent of Facebook’s revenue), it is revenue that Twitter can’t afford to lose.
That said, Twitter will have an even tougher time breaking through, considering Apple’s 2021 privacy changes that allow iPhone users to opt out of tracking. Twitter, along with Meta, Snap and Pinterest, has lost billions in market value since the change went into effect. Meanwhile, TikTok, Amazon, streaming platforms and retail companies like Etsy and Walmart are using new approaches to digital ads (such as relying on purchasing history) and shaving away at Facebook and Google’s share of the online ad business.
Still, Rose said he doesn’t think Twitter should try to imitate its competitors. He plans on advising his clients to focus on what they want from Twitter: It could merely serve as a less serious version of the TV and radio ad space, where campaigns can have fun and experiment with pop culture.
“Every platform can’t be everything to everyone,” Rose added. And while Twitter’s 259.4 million active users certainly aren’t everyone, its undeniably large role in public discourse means the political sphere can’t ignore it. But it’s unlikely that attention will translate into more money for Twitter, considering posting is still free.
- Did TikTok Disinformation Just Decide the Next President of the Philippines? ›
- It's Midterms in LA and Celebrities Have Thoughts! ›