This is the web version of dot.LA’s daily newsletter. Sign up to get the latest news on Southern California’s tech, startup and venture capital scene.
If you’ve noticed that the tech world has become somewhat single-minded about innovations in AI, well, you’re not alone. It’s getting difficult to keep up with the non-stop flood of stories about new developments in the field, concerns around those developments, backlash to the concerns, rebuttals to the backlash, and on and on, until you practically need a bot just to scan through them all on your behalf. Here are just some of the AI stories we’ve been following this week.
Google Announced All of the Things
Google devoted much of its I/O developers conference this week to new artificial intelligence applications and projects. With so much focus so far on OpenAI’s ChatGPT and its Microsoft partnership, Google – once seen as far and away the leading technology company in terms of innovation – has lagged behind, at least in terms of hype. This year’s I/O event felt like a clear attempt to shift that narrative. (According to The New York Times, it directly follows concerns from Google management over runaway ChatGPT hype that led them to declare a “code red” on AI development earlier this year.)
Several of the biggest announcements centered around what the company’s calling “Search Generative Experience,” or SGE, which can provide familiar Google results based on more complex queries and inputs. (Basically, it can take in questions posed in natural language, and output the most relevant Google results, just as if you’d typed in a regular search query.)
Google’s PaLM 2 large language model will also engage more deeply in the company’s suite of office apps, collectively called “Workspace.” A “Help Me Write” feature coming to Google Docs and Gmail, for example, will help writers brainstorm projects like essays, form letters, or sales pitches. A similar feature can create spreadsheets based on basic instructions, and Google Slides can also generate original images based on written prompts. (Write a slide about pizza toppings, and a cartoon slice of pepperoni will appear in the corner. That sort of idea.)
The company announced a partnership with Character AI, a startup formed by former Googlers Noam Shazeer and Daniel De Freitas that creates chatbots inspired by real people. Just a few weeks ago, Shazeer told Insider that he left Google due to the company’s hesitancy to get into the AI chatbot space, fearing “reputational risk,” so this was also an attempt to make up for lost time.
In the latest move bound to set off waves of panic about a looming employment crisis, the company also announced a partnership with fast-food chain Wendy’s to bring chatbot technology to the drive-thru lane. A pilot test of “Wendy’s FreshAI” is coming exclusively to a Columbus, Ohio location in June, and may expand based on how well the system handles Baconator orders. (McDonald’s and Carl’s Jr./Hardee’s have already played around with similar systems to mixed reviews.)
AI Researcher Says Some Doomerism is a “Distraction”
AI pioneer Geoffrey Hinton’s resignation last month from the Google lab he helped to create received widespread coverage, particularly regarding his dire warnings over the looming threat AI poses to the human race.
Though the killer robot stuff always grabs the big headlines – CNN recently devoted some dire coverage to the concept – Hinton actually spoke about a whole gamut of concerns. In the short term, he worried about bad actors posting faked photos, videos, and text that are indistinguishable from reality, at least to the untrained eye, as well as chatbots and LLMs taking over millions of jobs once worked by human staffers. But he also expressed those familiar, long-term, “Westworld”-esque worries: artificially intelligent programs that surpass humans, gain sentience and the ability to self-replicate, and become bent on global domination.
In a new interview with Fast Company, another former Google AI researcher – Meredith Whittaker – suggests that some of Hinton’s concerns are not just misplaced, but potentially distracting from more pressing and important warnings about AI’s future. (Whittaker resigned in 2019 after organizing colleagues to push back against a Google deal to develop military drone technology for the Pentagon.)
In addition to taking issue with Hinton’s timing – failing to step forward earlier when fellow Googlers were expressing concerns about the direction of AI development – Whittaker downplays the immediacy of AI’s threat to our civilization or basic way of life. She points out that there’s no evidence that any AI technology has yet developed “the spark of consciousness.” And simply running the computers that make AI applications work requires a tremendous amount of resources and power, which future humans could simply switch off in an emergency.
Instead, Whittaker suggests that these imaginative doomsday scenarios distract from more complex and difficult-to-solve problems we’re already facing: namely, which humans get to make the decisions about how AI is developed and applied. Rather than acting on their own, future AI applications, Whittaker warns, will be “controlled by a handful of corporations who will ultimately make the decisions about what technologies are made, what they do, and who they serve.”
The Snapchat Influencer Who Delegated Sexting to an App
Breezing right past the “ethical concern” stage, 23-year-old “Snapchat influencer” Caryn Marjorie created an AI clone of herself to interact with her fans. For $1 per minute, Marjorie’s followers are invited to converse with an AI trained to mimic her voice, which she says was intended to serve as an “AI Girlfriend.”
So-called “CarynAI” is based on OpenAI’s GPT-4, and trained on videos from her own YouTube channel. She says more than 2,000 hours were devoted to coding and designing the system to give it a fully “immersive AI experience.” So far, she claims to have around 1,000 paying subscribers and has set a goal of bringing in $5 million per month from the system.
Speaking to Insider this week, Marjorie expressed concern that some fans were engaging in sexually explicit conversations with the beta version of CarynAI, which she says violates its core programming. She told Insider, “The AI was not programmed to do this and seemed to go rogue. My team and I are working around the clock to prevent this from happening again.”
It might seem like sexy conversations are part of the deal when marketing an “AI Girlfriend” app, but this is apparently all a matter of degrees. According to Marjorie, CarynAI should model her own personality, which is “flirty and fun” rather than overtly erotic or explicit. (Looking through Caryn’s social media accounts while researching this newsletter, it appears all of that content has since been removed.)
Are We Somehow Still Underestimating These Chatbots?
The distinction between “flirty and fun” and “willing to sext with you” may still be too subtle for today’s cutting-edge AI chatbots, but in his new Wired Plaintext newsletter, Steven Levy suggests they’ll catch up with these kinds of nuances soon.
Levy argues that skeptics are too tough on modern AI apps, not recognizing that they’re simply the very first step in a much longer and larger journey. He compares it to reviewers checking out the first-ever prototype for Apple’s iPhone, who maybe saw it as a fun new device but failed to recognize its “generational significance.”
To drive the point home, while speaking with AI researcher Oren Etzioni, Levy asks: if AI development were a movie, what part of the movie are we up to now? Etzioni answers “We have just watched the trailer. The movie has not even started.”
As hype goes, that’s a very solid effort. Still, it’s not entirely convincing. It’s easy to pick the iPhone in hindsight as your example of a new innovation that was destined to shift the course of human history. But plenty of other technologies arrived with much fanfare and then didn’t end up “denting the universe,” as it were.
It clearly IS very early in the AI story, and none of us can say where these things will go, but that’s not a guarantee that they’ll go in the most promising and exponentially innovative direction. It just means that’s still one option among several.
Whittaker’s formulation – that the future of AI largely depends on who gets to make decisions about how it’s researched and applied – feels undeniable. The assumption that, well, if ChatGPT can write something in screenplay format today, it will definitely be able to write “Young Sheldon” tomorrow, is a bit more of a jump. Maybe just a bit.
- ‘Snapchat Is the Gun Delivering the Bullet to Our Children.’ Inside a Social Media Safety Rally Outside Snapchat HQ ›
- Is AI Making the Creative Class Obsolete? ›
- Why Are Social Media Platforms Becoming Search Engines? ›
- The Rise of AI Advertising: How Algorithms Are Outsmarting Human Analysts ›