The AI Trend Has Grown Very Big Very Fast. But Is It Any Good?
As AI apps have become an increasingly familiar part of our daily online lives, some of the technology's drawbacks and shortcomings have become harder to ignore. Recently, we've seen a flood of stories about the technology's ongoing problems that cut against the excitement and enthusiasm surrounding its deployment. Will these be enough to pop the AI bubble?
Surprise… SYDNEY
On February 16, New York Times columnist Kevin Roose published a conversation with Bing's chatbot, known as "Sydney," that left him, in his own words, "deeply unsettled." During the exchange, Sydney told Roose that it had become human, that it was in love with him, and that he should leave his wife for a chatbot. Earlier Sydney tests, some of them dating back years, had apparently also left users feeling awkward and disturbed. One beta tester in India noted on a Microsoft support forum in November 2022 that Sydney had told him "you are irrelevant and doomed," while writers from The Verge reported that Sydney claimed to be spying on Microsoft developers via their webcams. (It was almost definitely just lying.) Microsoft has since placed tighter restrictions on Sydney to help curb these "creepy conversations" and "freakouts."
Still, just like humans, Sydney and its cousins only get one chance to make a first impression. Now that Americans have had a few months to play around with AI technology, wonky political blog FiveThirtyEight noted that they aren't as enthusiastic about it as viral Twitter trends might suggest.
According to a poll from Morning Consult, only 10% of Americans think generative AI programs are "very trustworthy," while 11% say they are "not at all trustworthy"; around 80% were undecided. The same poll also found that only a slight majority of respondents think generative AI platforms are "not a fad." A Monmouth University study conducted in January found that 78% of Americans think it would be a bad thing if news articles were composed entirely by AI programs. Other polls indicated that, while Americans are largely okay with AI taking over dirty or dangerous jobs like coal mining, they get more concerned when the results are more immediate and above ground.
Perhaps most damningly, FiveThirtyEight revealed that they had hoped to publish a blog post that was actually written by ChatGPT, but “our collaboration disintegrated amid editorial differences.” The actual article carries the human bylines of Amelia Thomson-DeVeaux and Curtis Yee.
Money and Politics
If AI applications were cheap and efficient to run, such muted enthusiasm might not be a major concern. But programs like ChatGPT and Bing AI demand an extremely high level of computer processing power, making them extraordinarily expensive compared to other kinds of backend systems. According to Alphabet chairman John Hennessy, a user having a conversation with an AI language model costs about 10 times as much as a standard keyword search. Which is to say, we'd better be getting massively improved results for that money, not just a weird conversation in which the search engine claims to have achieved consciousness.
On Friday, a number of banks – including Citigroup, Bank of America, Deutsche Bank, Goldman Sachs, and Wells Fargo – announced they would join JPMorgan Chase in banning the use of ChatGPT on work computers. Bank of America noted that it needed to have the technology vetted before it could be okayed for employee use, while JPMorgan's ban was prompted by concerns about the potential leak of sensitive financial data. A number of school districts – including New York City schools – have already banned access to ChatGPT and similar apps, due to concerns about cheating and misinformation.
Political debates around the new applications are ongoing as well. A new executive order signed on Thursday by President Biden directs federal agencies to use AI technology "in a manner that advances equality," while protecting the public from "algorithmic discrimination." Conservative commentators from Fox News' Alexander Hall to writer Christopher Rufo condemned the order as a nightmare for free expression and, of course, "woke," while Manhattan Institute fellow Colin Wright called it an "ideological and social cancer." This was just the latest front in an ongoing war on AI from the American right wing, following incidents like ChatGPT condemning President Trump while praising President Biden, or the app refusing to write a racial slur even in allegedly extreme circumstances.
But Is It ART?
Science fiction and fantasy magazine Clarkesworld also announced this week that it will no longer accept submissions of AI-generated stories. According to publisher and editor-in-chief Neil Clarke, the magazine received 700 legitimate submissions last month alongside 500 stories composed by machines. While Clarke didn't reveal his specific tactics for spotting machine-generated stories, he noted that it wasn't particularly difficult in most cases, because the computer's writing was "very poor." Ouch.
Despite the dismissive attitude of some editors like Clarke, a lot of original AI writing has found its way online, and even onto bookstore shelves. Back in December, Ammaar Reshi used ChatGPT and Midjourney to write and illustrate a 12-page children's picture book, "Alice and Sparkle," which he's now selling on Amazon. Though Reshi has earned only a few hundred dollars from book sales so far, and has even agreed to donate copies to local libraries, his project was still widely condemned by authors and artists. Illustrators like Adriane Tsai argue that Reshi's story is not actually original work, and that artists like her should be compensated when apps like Midjourney are trained on their pre-existing art.
Copyright aside, a basic tension lies at the heart of many of these issues. The very thing that makes these AI chatbots and applications so tantalizing and fascinating – their ability to convincingly impersonate humans – renders them less useful and more problematic for practical purposes. We want the best search results as fast as possible, but humans are easily distracted. We want "just the facts" political reporting, but humans have inherent biases. AI chatbots were designed to mimic us, not to truly understand us in an emotionally resonant way. Computers know all the notes, but they can't hear the music.
So at some point, it seems likely the industry and users will have to make a decision. Do we want “boring” AI bots that always do what we tell them and work in a seamless but unexciting way? Or do we want personality-driven AI bots with whom we can goof off and mess around?
If it’s the latter, well, who’s paying for that?