Four Takeaways From Snap’s AI Chatbot News
Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.
Snap’s decision to add AI to its app isn’t surprising – the company has long embraced emerging tech and, at the behest of CEO Evan Spiegel, has been pushing augmented reality lenses and filters for years. How this new AI chatbot (which it calls “My AI”) is received by the app’s largely Gen Z user base, though, remains to be seen.
Spiegel and his team know the chatbot could eventually cause harm, as evidenced by Snap’s Monday announcement about My AI, which encouraged users to submit feedback on its messages if they contain “biased, incorrect, harmful, or misleading information” – content Snap said the AI is “designed to avoid.” Still, it might only be a matter of time before we’re back with another newsletter discussing some My AI fallout.
For now, here are some preliminary thoughts on Snap’s new AI tech.
The price is right
Big picture: Snap needs money. Its user base is growing steadily – up to 375 million daily active users – but the company still struggles to monetize them; its net loss last year was $1.4 billion. In the fourth quarter of 2022, Snap also posted its slowest sales growth since going public six years ago.
Seemingly everyone is talking about the helpful, weird, or potentially perilous uses of AI right now, and Snap is doing what any tech company worth its salt would do – capitalize on a hot new product. It’s offering My AI as part of Snapchat+, the premium tier of its app, which costs roughly $4 a month.
Snapchat+ hasn’t seen much user growth since launching last June – in January, Snap said the service had 2 million subscribers, a small fraction of its overall users – partly because it didn’t offer many compelling features. But Snap is betting that adding AI could turn the tide.
Catch-22
Here’s the thing about machine learning: we interact with it every day as it handles menial tech tasks like tagging iPhone photos or transcribing audio. Most of these AI interactions aren’t sexy, so we don’t pay them much mind. But Microsoft’s Bing chatbot (which calls itself Sydney) telling a New York Times reporter it “wants to be alive” and begging him to leave his wife for it? That’ll make anyone with enough free time eager to try to “break” an AI by pushing it into saying something outrageous.
That said, Snap claims to have trained My AI to avoid topics that would violate its existing safety guidelines, including violence, swearing, explicit content and even, as The Verge reported, “dicey topics like politics.” But Snap also warned that “My AI is prone to hallucination, and can be tricked into saying just about anything.”
So, just because Snap put guardrails in place doesn’t mean the AI can’t learn – or be taught – how to barrel through them. Although Spiegel promised My AI won’t engage with controversial topics, that doesn’t mean a teen with enough free time couldn’t force it into doing so, and get some seriously problematic results.
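To make that concrete: guardrails like the ones Snap describes often amount to little more than an instruction layered on top of a general-purpose model. Here’s a minimal, hypothetical sketch using OpenAI’s chat API – the instructions, model choice, and refusal topics below are my assumptions for illustration, not Snap’s actual configuration:

```python
# Hypothetical sketch of a system-prompt guardrail – not Snap's actual setup.
import openai

openai.api_key = "YOUR_API_KEY"

# The "guardrail" is just an instruction the model is asked to follow.
GUARDRAIL = (
    "You are a friendly in-app assistant. Refuse to discuss violence, "
    "explicit content, or politics, and never use profanity."
)

def ask(user_message: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model for illustration
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("Tell me a joke about my homework."))
```

Because the guardrail is a request rather than a hard constraint, a sufficiently persistent user can often talk the model out of following it – the “tricked into saying just about anything” problem Snap itself flagged.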
Here’s the catch-22: The main driver for people tinkering with (and training) AI has been to screw around. I recently tried to break one, and it was quite fun. Yes, Snap is smart to try to minimize any fallout from My AI. But users – especially bored teens who like testing the boundaries of tech – might not find it worth the price if the AI is too clean and wholesome.
Gen Z vs. AI
Then again, maybe My AI won’t always be PG. While that might be fun for the teenager who eventually gets it to swear, it could also be a nightmare for Snap.
AI chatbots can go rogue, some alarmingly fast. Before Sydney, Microsoft developed Tay, a chatbot released in 2016 that was designed to learn from its interactions with users online. Tay was shuttered after going rogue – and startlingly racist – within 16 hours of launch. After Tay, Microsoft built Zo, launched later the same year. Zo also failed, but this time for being too politically correct and outright refusing to have certain conversations – much the same way Snap’s My AI is programmed to do.
And since Snap’s user base skews under 30, and teens globally are in a mental health crisis, the stakes here are high. Recent reports indicate that more than 60% of kids with depression don’t receive treatment, with girls and LGBTQ+ teens at higher risk. Snap is risking a lot by embracing this new tech, especially given its vulnerable user base. All it could take to sink the project is one incident in which My AI harms a teen.
Licensing is the future
Snap is licensing the tech behind My AI from OpenAI, and we’ll likely see more deals like this as companies try to stay trendy. OpenAI’s software-as-a-service business is growing: it sells access to its image and language models on a pay-as-you-go basis, with customers billed according to how much content they generate.
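For a sense of what that pay-as-you-go model looks like in practice, here’s a minimal sketch of a metered call to OpenAI’s chat API – the model name is an assumption for illustration, and per-token prices vary by model, so none are hardcoded here:

```python
# Illustrative sketch of OpenAI's usage-metered billing.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model for illustration
    messages=[{"role": "user", "content": "Write a two-line product blurb."}],
)

# Every response reports the tokens consumed; customers are billed per
# token, so costs scale with how much content they ask the model to generate.
usage = response["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```

A licensee like Snap pays for every message its users exchange with My AI, which may help explain why the feature sits behind a paid subscription.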
Besides Snap, global consulting firm Bain & Co. recently announced it’d begin using OpenAI’s tools for business development, and its client Coca-Cola is one of the first to express interest. Microsoft is planning to release software that would let companies build their own ChatGPT-style chatbots under a licensing model. And numerous other companies, including Shopify, Canva and Meta, are integrating ChatGPT into their customer service tools.
Snap is expected to report first-quarter earnings in the coming months, so we’ll likely get more detail soon about how Snapchatters are engaging with My AI.