The Learning Perv: How I Learned to Stop Worrying and Love Lensa’s NSFW AI

Drew Grant

Drew Grant is dot.LA's Senior Editor. She's a media veteran with more than 15 years covering entertainment and local journalism. During her tenure at The New York Observer, she founded one of their most popular verticals, tvDownload, and transitioned from generalist to Senior Editor of Entertainment and Culture, overseeing a freelance contributor network and ushering in the paper's redesign. More recently, she was Senior Editor of Special Projects at Collider, a writer for RottenTomatoes streaming series on Peacock and a consulting editor at RealClearLife, Ranker and GritDaily. You can find her across all social media platforms as @Videodrew and send tips to drew@dot.la.

Drew Grant in Lensa AI art
Drew Grant

It took me 48 hours to realize Lensa might have a problem.

“Is that my left arm or my boob?” I asked my boyfriend, which is not what I’d consider a GREAT question to have to ask when using photo editing software.

“Huh,” my boyfriend said. “Well, it has a nipple.”

Well then.


I had already spent an embarrassing amount of money downloading nearly 1,000 high-definition images of myself generated by AI through an app called Lensa as part of its new “Magical Avatar” feature. There are many reasons to cock an eyebrow at the results, some of which have been covered extensively in the last few days in a mounting moral panic as Lensa has shot to the #1 slot in the App Store.

The way it works: users upload 10 to 20 photos of themselves from their camera roll. There are a few suggestions for best results: the pictures should show different angles, different outfits, different expressions. They shouldn’t all be from the same day. (“No photoshoots.”) Only one person should be in the frame, so the system doesn’t confuse you with someone else.

Lensa runs on Stable Diffusion, a deep-learning model that generates images from text or picture prompts, in this case taking your selfies and ‘smoothing’ them into composites that use elements from every photo. That composite can then be used to make the second generation of images, so you get hundreds of variations with no identical pictures that hit somewhere between the uncanny valley and one of those magic mirrors Snow White’s stepmother had. The underlying technique has been around since 2019 and powers other AI image generators, of which DALL-E is the most famous example. Using a latent diffusion model trained on a vast set of captioned images scraped from the web, with text and images linked by OpenAI’s CLIP model (itself trained on 400 million image-text pairs), Lensa can spit back 200 photos across 10 different art styles.
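For the technically curious, here is roughly what a single text-to-image call looks like using the open-source Stable Diffusion weights through Hugging Face’s diffusers library. This is a minimal sketch for illustration only: Lensa’s actual avatar pipeline, which personalizes the model on your uploaded selfies, isn’t public, and the model name and prompt below are just assumptions.

```python
# Illustrative sketch only. This is NOT Lensa's pipeline; it is a bare-bones
# text-to-image call against the publicly released Stable Diffusion weights
# using Hugging Face's open-source "diffusers" library.
import torch
from diffusers import StableDiffusionPipeline

# Load the public Stable Diffusion v1.5 checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a GPU; use "cpu" and float32 otherwise

# A made-up prompt in the spirit of Lensa's "cosmic goddess" style.
prompt = "portrait of a woman as a cosmic goddess, digital art, highly detailed"

# Each call samples a fresh image from the same prompt, which is why no two
# generated avatars come out identical.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("avatar.png")
```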

Though the tech has been around a few years, the rise in its use over the last several days may have you feeling caught off guard by a singularity that suddenly appears to have been bumped up to sometime before Christmas. ChatGPT made headlines this week for its ability to maybe write your term papers, but that’s the least it can do. It can write code, break down complex concepts and equations into explanations a second grader could follow, generate fake news and help prevent its dissemination.

It seems insane that when confronted with the Asimovian reality we’ve been waiting for with either excitement, dread or a mixture of both, the first thing we do is use it for selfies and homework. Yet here I was, filling up almost an entire phone’s worth of pictures of me as fairy princesses, anime characters, metallic cyborgs, Lara Croftian figures, and cosmic goddesses.

And in the span of Friday night to Sunday morning, I watched new sets reveal more and more of me. Suddenly the addition of a nipple went from a Cronenbergian anomaly to the standard, with almost every photo showing me with revealing cleavage or completely topless, even though I’d never submitted a topless photo. This was as true for the male-identified photos as for the ones where I listed myself as a woman. (Lensa also offers an “other” option, which I haven’t tried.)

Drew Grant

When I changed my selected gender from female to male: boom, suddenly, I got to go to space and look like Elon Musk’s Twitter profile, where he’s sort of dressed like Tony Stark. But no matter which photos I entered or how I self-identified, one thing was becoming more evident as the weekend went on: Lensa imagined me without my clothes on. And it was getting better at it.

Was it disconcerting? A little. The arm-boob fusion was more hilarious than anything else, but as someone with a larger chest, it would be weirder if the AI had missed that detail completely. But some of the images had cropped my head off entirely to focus just on my chest, which…why?

Drew as a baby with another face behind her

Drew Grant

Drew as a "male" preference

According to AI expert Sabri Sansoy, the problem isn’t with Lensa’s tech but most likely with human fallibility.

“I guarantee you a lot of that stuff is mislabeled,” said Sansoy, a robotics and machine learning consultant based out of Albuquerque, New Mexico. Sansoy has worked in AI since 2015 and claims that human error can lead to some wonky results. “Pretty much 80% of any data science project or AI project is all about labeling the data. When you’re talking in the billions (of photos), people get tired, they get bored, they mislabel things and then the machine doesn’t work correctly.”

Sansoy gave the example of a liquor client who wanted software that could automatically identify their brand in a photo; to train the program to do the task, the consultant first had to hire human production assistants to comb through images of bars and draw boxes around all the bottles of whiskey. But eventually, the mind-numbing work led to mistakes as the assistants got tired or distracted, resulting in the AI learning from bad data and mislabeled images. When the program mistakes a cat for a bottle of whiskey, it’s not because it was broken. It’s because someone accidentally circled a cat.
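To make Sansoy’s point concrete, here is a tiny, hypothetical demonstration (not code from Lensa or Sansoy): the same model, trained on the same data, gets measurably worse as a growing fraction of its training labels are flipped.

```python
# Toy illustration of label noise: flip a fraction of training labels to
# simulate tired annotators "circling a cat" instead of a whiskey bottle,
# then watch test accuracy fall.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in [0.0, 0.1, 0.2, 0.4]:
    # Corrupt a fraction of the training labels.
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"{noise:.0%} mislabeled -> test accuracy {model.score(X_test, y_test):.2f}")
```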

So maybe someone forgot to circle the nudes when training the Stable Diffusion neural net Lensa uses. That’s a very generous interpretation that would explain a baseline amount of cleavage shots. But it doesn’t explain what I and many others were witnessing, which was an evolution from cute profile pics to brassiere thumbnails.

When I reached out for comment via email, a Lensa spokesperson responded not by directing us to a PR statement but by actually taking the time to address each point I’d raised. “It would not be entirely accurate to state that this matter is exclusive to female users,” said the Lensa spokesperson, “or that it is on the rise. Sporadic sexualization is observed across all gender categories, although in different ways. Please see attached examples.” Unfortunately, the examples were not for external use, but I can tell you they were of shirtless men who all had rippling six packs, hubba hubba.

“The Stable Diffusion model was trained on unfiltered Internet content, so it reflects the biases humans incorporate into the images they produce,” the response continued. “Creators acknowledge the possibility of societal biases. So do we.” The spokesperson reiterated that the company was working on updating its NSFW filters.

As for my observation about gender-specific styles, the spokesperson added: “The end results across all gender categories are generated in line with the same artistic principles. The following styles can be applied to all groups, regardless of their identity: Anime and Stylish.”

I found myself wondering if Lensa was also relying on AI to handle its PR, before surprising myself by not caring all that much. If I couldn’t tell, did it even matter? This is either a testament to how quickly our brains adapt and become numb to even the most incredible of circumstances, or to the sorry state of hack-flack relationships, where the gold standard of communication is a streamlined transfer of information without things getting too personal.

As for the case of the strange AI-generated girlfriend? “Occasionally, users may encounter blurry silhouettes of figures in their generated images. These are just distorted versions of themselves that were ‘misread’ by the AI and included in the imagery in an awkward way.”

So: gender is a social construct that exists on the Internet; if you don’t like what you see, you can blame society. It’s Frankenstein’s monster, and we’ve created it after our own image.

Or, as the language processing AI model ChatGPT might put it: “Why do AI-generated images always seem so grotesque and unsettling? It's because we humans are monsters and our data reflects that. It's no wonder the AI produces such ghastly images - it's just a reflection of our own monstrous selves.”


'Open Letter' Proposing 6-Month AI Moratorium Continues to Muddy the Waters Around the Technology

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.
Evan Xie

AI continues to dominate the news – not just within the world of technology, but in mainstream news sources at this point – and the stories have entered a by-now familiar cycle. A wave of exciting new developments, releases and viral apps is followed by a flood of alarm bells and concerned op-eds, wondering out loud whether or not things are moving too fast for humanity’s own good.

With OpenAI and Microsoft’s GPT-4 arriving a few weeks ago to massive enthusiasm, we were overdue for our next hit of jaded cynicism, warning about the potentially dire impact of intuitive chatbots and text-to-image generators.

Sure enough, this week, more than 1,000 signatories released an open letter calling for all AI labs to pause training any new systems more powerful than GPT-4 for six months.

What does the letter say?

The letter calls out a number of familiar concerns for anyone who has been reading up on AI development this past year. On the most immediate and practical level, it cautions that chatbots and automated text generators could potentially eliminate vast swathes of jobs previously filled by humans, while “flood[ing] our information channels with propaganda and untruth.” The letter then continues into full apocalypse mode, warning that “nonhuman minds” could eventually render us obsolete and dominate us, risking “loss of control of our civilization.”

The six-month break, the signatories argue, could be used to jointly develop shared safety protocols around AI design to ensure that they remain “safe beyond a reasonable doubt.” They also suggest that AI developers work in collaboration with policymakers and politicians to develop new laws and regulations around AI and AI research.

The letter was signed by several AI developers and experts, along with tech industry royalty like Elon Musk and Steve Wozniak. TechCrunch does point out that no one from inside OpenAI seems to have signed it, nor anyone from Anthropic, the company founded by former OpenAI developers who left to design their own “safer” chatbots. OpenAI CEO Sam Altman did speak to the Wall Street Journal this week in reference to the letter, noting that the company has not yet started work on GPT-5 and that time for safety tests has always been built into its development process. He referred to the letter’s overall message as “preaching to the choir.”

Critics of the letter

The call for an AI pause was not without critics, though. Journalist and investor Ben Parr noted that the vague language makes it functionally meaningless, without any kind of metrics to gauge how “powerful” an AI system has become or suggestions for how to enforce a global AI pause. He also noted that some signatories, including Musk, are OpenAI and ChatGPT competitors, potentially giving them a personal stake in this fight beyond just concern for the future of civilization. Others, like NBC News reporter Ben Collins, suggested that the dire AI warnings could be a form of dystopian marketing.

On Twitter, entrepreneur Chris Pirillo noted that “the genie is already out of the bottle” in terms of AI development, while physicist and author David Deutsch called out the letter for confusing today’s AI apps with the Artificial General Intelligence (AGI) systems still only seen in sci-fi films and TV shows.

Legitimate red flags

Obviously, the letter speaks to relatively universal concerns. It’s easy to imagine why writers would be concerned by, say, BuzzFeed now using AI to write entire articles and not just quizzes. (The website isn’t even using professional writers to collaborate with and copy-edit the software’s output anymore. The new humans helping “Buzzy the Robot” compose its articles are non-editorial employees from the client partnership, account management, and product management teams. Hey, it’s just an “experiment,” freelancers!)

But it does once more raise some red flags about the potentially misleading ways that some in the industry and the media are discussing AI, which continues to make these kinds of high-level discussions around the technology more cumbersome and challenging.

A recent viral Twitter thread credited ChatGPT-4 with saving a dog’s life, leading to a lot of breathlessly excited coverage about how computers were already smarter than your neighborhood veterinarian. The owner entered the dog’s symptoms into the chatbot, along with copies of its blood work, and ChatGPT responded with the most common potential ailments. As it turns out, a live human doctor tested the animal for one of the bot’s suggested illnesses, and the diagnosis proved correct. So the computer is, in a very real sense, a hero.

Still, considering what might be wrong with dogs based on their symptoms isn’t what ChatGPT does best. It’s not a medical or veterinary diagnostic tool, and it doesn’t have a database of dog ailments and treatments at the ready. It’s designed for conversations, and it’s just guessing as to what might be wrong with the animal based on the texts on which it was trained, sentences and phrases that it has seen connected in human writing in the past. In this case, the app guessed correctly, and that’s certainly good news for one special pupper. But there’s no guarantee it would get the right answer every time, or even most of the time. We’ve seen a lot of evidence that ChatGPT is perfectly willing to lie, and can’t actually tell the difference between truth and a lie.
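For context, here is roughly what putting symptoms to a chat model looks like through OpenAI’s Python client. It is a minimal sketch with made-up symptoms and a made-up prompt, not the dog owner’s actual session.

```python
# Minimal sketch of querying a chat model via OpenAI's Python client.
# The symptoms and wording below are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "My dog is lethargic, has pale gums, and her bloodwork "
                    "shows severe anemia. What conditions could explain this?"},
    ],
)

# The reply is generated text, not a verified diagnosis: the model predicts
# plausible continuations of the prompt rather than consulting a veterinary
# database.
print(response.choices[0].message.content)
```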

There’s also already a perfectly solid technology that this person could have used to enter a dog’s symptoms and research potential diagnoses and treatments: Google search. A search results page also isn’t guaranteed to come up with the correct answer, but it’s at least as reliable as ChatGPT-4 in this particular use case, if not more so, at least for now. A quality post on a reliable veterinary website would hopefully contain similar information to the version ChatGPT pulled together, except it would have been vetted and verified by an actual human expert.

Have we seen too many sci-fi movies?

A response published in Time by computer scientist Eliezer Yudkowsky – long considered a thought leader in the development of artificial general intelligence – argues that the open letter doesn’t go far enough. Yudkowsky suggests that we’re currently on a path toward “building a superhumanly smart AI,” which will very likely result in the death of every human being on the planet.

No, really, that’s what he says! The editorial takes some very dramatic turns that feel pulled directly from the realms of science-fiction and fantasy. At one point, he warns: “A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.” This is the actual plot of the 1995 B-movie “Virtuosity,” in which an AI serial killer app (played by Russell Crowe!) designed to help train police officers grows his own biomechanical body and wreaks havoc on the physical world. Thank goodness Denzel Washington is around to stop him.

And, hey, just because AI-fueled nightmares have made their way into classic films, that doesn’t mean they can’t also happen in the real world. But it nonetheless feels like a bit of a leap to go from text-to-image generators and chatbots – no matter how impressive – to computer programs that can grow their own bodies in a lab, then use those bodies to take control of our military and government apparatus. Perhaps there’s a direct line between the experiments being done today and truly conscious, self-aware, thinking machines down the road. But, as Deutsch cautioned in his tweet, it’s important to remember that AI and AGI are not necessarily the exact same thing. - Lon Harris

EVgo’s Stock Surges on Better-Than-Expected Q4 Earnings

EVgo released Q4 earnings that outpaced predictions from Wall Street, but will it be enough to keep the company afloat in the long run?

AgTech Startup Leaf Wants To Modernize the Farming Industry

How LA-based Leaf Agriculture wants to help farmers better manage their properties by leveraging data.

What We're Reading...

- Roku announced plans to cut expenses by laying off 200 more employees, 6% of its remaining workforce.

- According to Bloomberg, only about 270,000 Sony PlayStation VR2 headsets were sold in March, an underwhelming start for the new gadget.

- Microsoft plans to show more ads to Bing AI chatbot users.

- Google denied a report in The Information alleging that it trained its Bard AI chatbot on ChatGPT data.

--


Will EVgo’s Stock Surges Be Enough To Keep the Company Stable?

David Shultz

David Shultz reports on clean technology and electric vehicles, among other industries, for dot.LA. His writing has appeared in The Atlantic, Outside, Nautilus and many other publications.

Image from EVGo

Shares of EVgo are up over 20% today after the company released Q4 earnings that outpaced predictions from Wall Street. Analysts had predicted the company would announce a loss per share in the neighborhood of $0.16-$0.18, but the Los Angeles-based electric vehicle charging company reported a much more meager loss, to the tune of just $0.06 per share.


How AgTech Startup Leaf Wants To Modernize the Farming Industry

Samson Amore

Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College and previously covered technology and entertainment for TheWrap and reported on the SoCal startup scene for the Los Angeles Business Journal. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.

green leaf drawing and rolling farm lands
Evan Xie

At least 50,000 acres in the state of California are estimated to be underwater after a record-breaking year of rainfall. So far this year, California has received nearly 29 inches of rain, with the bulk being dumped on its central and southern coasts. Farmers are already warning that the price of dairy, tomatoes and other vegetables will rise as the weather prevents them from re-seeding their fields.

While no current technology can prevent weather disasters, Leaf Agriculture, a Los Angeles-based startup that launched in 2018, wants to help farmers better manage their properties by leveraging data.
