AI Had a Very Bad Week. What Does That Mean for Its Future?

This is the web version of dot.LA's daily newsletter. Sign up to get the latest news on Southern California's tech, startup and venture capital scene.
Nonetheless, as AI apps have become an increasingly familiar and recognizable part of our daily online lives, the technology's drawbacks and shortcomings have become harder to ignore. Recently, we've seen a flood of stories about its ongoing problems and disadvantages that cut against the excitement and enthusiasm surrounding its deployment. Will these be enough to pop the AI bubble?
Surprise… SYDNEY
On February 16, New York Times columnist Kevin Roose published a conversation with Bing's chatbot, known as "Sydney," that left him, in his own words, "deeply unsettled." During the exchange, Sydney told Roose that it had become human, that it was in love with him, and that he should leave his wife for a chatbot. Earlier Sydney tests, some of them dating back years, had apparently left users feeling similarly awkward and disturbed. One beta tester in India noted on a Microsoft support forum in November 2022 that Sydney had told him "you are irrelevant and doomed," while writers from The Verge reported that Sydney claimed to be spying on Microsoft developers via their webcams. (It was almost certainly just lying.) Microsoft has since set more restrictive limits on Sydney to help curb these "creepy conversations" and "freakouts."
Still, just like humans, Sydney and its cousins only get one chance to make a first impression. Now that Americans have had a few months to play around with AI technology, wonky political blog FiveThirtyEight noted that they may not be as enthusiastic about it as viral Twitter trends would suggest.
According to a poll from Morning Consult, only 10% of Americans think generative AI programs are "very trustworthy"; 11% responded that they are "not at all trustworthy," while around 80% of people were undecided. The same poll also found that only a slight majority of respondents think generative AI platforms are "not a fad." A Monmouth University study conducted in January found that 78% of Americans think news articles composed entirely by AI programs would be a bad thing. Other polls indicated that, while Americans are largely okay with AI taking over dirty or dangerous jobs like coal mining, they get more concerned when the results hit closer to home.
Perhaps most damningly, FiveThirtyEight revealed that they had hoped to publish a blog post actually written by ChatGPT, but "our collaboration disintegrated amid editorial differences." The published article carries the human bylines of Amelia Thomson-DeVeaux and Curtis Yee.
Money and Politics
If AI applications were cheap and efficient to run, such muted enthusiasm might not be a major concern. But programs like ChatGPT and Bing AI demand an extremely high level of computer processing power, making them extraordinarily expensive compared to other kinds of backend systems. According to Alphabet chairman John Hennessy, a user conversation with an AI language model costs about 10 times as much as a standard keyword search. Which is to say, we'd better be getting massively improved results for that money, not just a weird conversation in which the search engine claims it has achieved consciousness.
On Friday, a number of banks, including Citigroup, Bank of America, Deutsche Bank, Goldman Sachs, and Wells Fargo, announced they would join JPMorgan Chase in banning the use of ChatGPT on work computers. Bank of America noted that it needed to have the technology vetted before it could be okayed for employee use, while JPMorgan's ban was prompted by concerns about the potential leak of sensitive financial data. A number of school districts, including New York City schools, have already banned access to ChatGPT and similar apps, due to concerns about cheating and misinformation.
Political debates around the new applications are ongoing as well. A new executive order signed on Thursday by President Biden directs federal agencies to use AI technology "in a manner that advances equality," while protecting the public from "algorithmic discrimination." Conservative commentators from Fox News' Alexander Hall to writer Christopher Rufo condemned the order as a nightmare for free expression and, of course, "woke," while Manhattan Institute fellow Colin Wright called it an "ideological and social cancer." This was just the latest front in an ongoing war against AI from the American right wing, following incidents like ChatGPT condemning President Trump while praising President Biden, or the app refusing to repeat a racial slur even in allegedly extreme circumstances.
But Is It ART?
Science fiction and fantasy magazine Clarkesworld also announced this week that it will no longer accept submissions of AI-generated stories. According to the magazine's publisher and editor-in-chief Neil Clarke, they received 700 legitimate submissions last month alongside 500 stories composed by machines. While Clarke didn't reveal his specific tactics for spotting machine-generated stories, he noted that it wasn't particularly difficult in most cases, because the computer's writing was "very poor." Ouch.
Despite the dismissive attitude of editors like Clarke, a lot of original AI writing has found its way online, and even onto bookstore shelves. Back in December, Ammaar Reshi used ChatGPT and Midjourney to write and illustrate a 12-page children's picture book, "Alice and Sparkle," which he's now selling on Amazon. Though Reshi has only earned a few hundred dollars from book sales so far, and has even agreed to donate copies to local libraries, his project was still widely condemned by authors and artists. Illustrators like Adriane Tsai argue that Reshi's work is not actually original, and that artists should be compensated when apps like Midjourney are trained on their pre-existing work.
Copyright aside, a basic tension lies at the heart of many of these issues. The very thing that makes these AI chatbots and applications so tantalizing and fascinating, their ability to convincingly impersonate humans, also renders them less useful and more problematic for practical purposes. We want the best search results as fast as possible, but humans are easily distracted. We want "just the facts" political reporting, but humans have inherent biases. AI chatbots were designed to mimic us, not to truly understand us in an emotionally resonant way. Computers know all the notes, but they can't hear the music.
So at some point, it seems likely the industry and users will have to make a decision. Do we want "boring" AI bots that always do what we tell them and work in a seamless but unexciting way? Or do we want personality-driven AI bots with whom we can goof off and mess around?
If it's the latter, well, who's paying for that?
- Is AI Making the Creative Class Obsolete? ›
- USC Expands AI Education With New Research Center, Amazon Partnership ›
- Prediction: AI Is Just Getting Started. In 2023, It Will Begin to Power Influencer Content ›
- Just How Revolutionary Is AI Becoming? - dot.LA ›
- How AI Is Advancing and What This Means for the Future - dot.LA ›
- Is Public Interest in AI Shifting? - dot.LA ›