
New Lawsuit Takes a Unique Approach To Holding Social Media Companies Accountable
Christian Hetrick
Christian Hetrick is dot.LA's Entertainment Tech Reporter. He was formerly a business reporter for the Philadelphia Inquirer and reported on New Jersey politics for the Observer and the Press of Atlantic City.
Social media companies are often accused of hosting harmful content, but it’s very hard to successfully sue them. A federal law known as Section 230 largely protects the platforms from legal responsibility for hate speech, slander and misinformation created by their users.
But a new lawsuit blaming TikTok for the deaths of two children is taking a different approach. Rather than accuse the company of failing to moderate content, the complaint claims TikTok is a dangerous and defective product.
The suit, filed last week in Los Angeles County Superior Court, takes aim at the video-sharing app’s recommendation algorithm, alleging that it served up videos depicting the deadly “Blackout Challenge,” in which people choke themselves to achieve a euphoric feeling. Two children—8-year-old Lalani Erika Walton and 9-year-old Arriani Jaileen Arroyo—died last year after allegedly trying the challenge, the suit said.
“We believe that there is a fundamental flaw in the design of the algorithm that directs these children to this horrific thing,” Matthew Bergman, the lawyer for the children's families, told dot.LA. Bergman is the founding attorney for the Social Media Victims Law Center, a self-described legal resource for parents of children harmed by social media.
Section 230 has long been an obstacle for social media’s opponents. "You can't sue Facebook. You have no recourse,” U.S. Sen. Richard Blumenthal, a Democrat from Connecticut, said last year after Facebook whistleblower Frances Haugen detailed Instagram’s toxic effect on young girls. The federal law’s defenders contend that Section 230 is what allows websites like YouTube and Craigslist to host user-generated content. It would be infeasible for companies to block all the objectionable posts from their massive user bases, the argument goes.
The strategy of bypassing that debate altogether by focusing on apps’ designs and features has gained steam lately. In May, an appellate panel ruled that Santa Monica-based Snap can’t dodge a lawsuit alleging that a Snapchat speed filter—which superimposed users’ speeds on top of photos and videos—played a role in a deadly car crash at 113 mph. The judges said Section 230 didn’t apply to the case because the lawsuit did not seek to hold Snap liable as a publisher.
Similarly, California lawmakers are advancing a bill that would leave social media companies open to lawsuits alleging their apps have addicted children. Proponents of the bill take issue with product features such as likes, comments and push notifications that grab users’ attention, with the ultimate goal of showing them ads.
“A product liability claim is separate and distinct from suing a company for posting third party content or publishing third party content, which we know has been unfruitful in many ways, for many years, as a vehicle to hold these companies accountable,” Bergman said.
Representatives for Culver City-based TikTok did not respond to a request for comment. In a previous statement about another TikTok user’s death, a company spokesperson noted that the “disturbing” blackout challenge predates TikTok, pointing to a 2008 warning from the Centers for Disease Control and Prevention about deadly choking games. The spokesperson claimed the challenge “has never been a TikTok trend.” The app currently doesn’t produce any search results for “blackout challenge” or a related hashtag.
It’s too early to tell whether product liability claims will be more successful against social media companies. “We're realistic here. This is a long fight,” Bergman said. In the meantime, his suit against TikTok takes pains to note what it is not about: the users posting the dangerous challenge videos.
“Plaintiffs are not alleging that TikTok is liable for what third parties said or did [on the platform],” the suit said, “but for what TikTok did or did not do.”
Why This Monk-Turned-Entrepreneur Is Betting His NFT Lounge Can Survive the FTX Fallout
05:00 AM | February 15, 2023
Photo: Rafi Lounge
Set in the foothills of Eastern Malibu across the street from Robert De Niro’s Nobu, the Rafi Lounge, an NFT-powered wellness center and coworking space, somehow looks like both a beachfront country club and a swank monastery. On a clear day, you can see Catalina Island across the ocean. The sign above the entrance says, “Welcome, please allow us to reintroduce you to yourself.”
Pushing through the braided rope entryway and passing a tranquil stone Buddha head waterfall, I arrived just after former Playboy model turned “Dancing With the Stars” host Brooke Burke had finished a yoga class. The central open space that usually houses yoga mats or stationary bikes has been cleared off, and the giant projection screen behind the small stage is playing a tranquil plant video – an hour earlier, a larger-than-life Burke was on it helping clients “booty burn.”
The building – which used to belong to a venture capital firm – has been totally transformed to look as if nature has reclaimed it, dotted with lemon trees and cloaked in ornamental faux-grass carpeting. Buddha statues sit in every corner, some larger than five feet. On the way to one yoga room, there’s a small shop selling pricey essential oils, Rafi Lounge merch and CBD gummies. On the shop’s wall hang three breathtakingly detailed charcoal portraits of indigenous people, drawn by the founder. Construction is ongoing, as former corner offices are converted into hot yoga saunas and a spa.
On the day of my visit, the place is bustling with staff lugging boxes of Himalayan salt panels to install in the hot yoga room. Israeli-born kung fu master and former monk Rafi Anteby, the founder of the eponymous space, tells me that after our chat he plans to paint them all black to match the walls. No detail escapes his notice, something evident in his mandala work.
Rafi Lounge founder, Rafi Anteby, pictured here with his Mandala and sand collections. Photo: Rafi Lounge
The Rafi Lounge opened last year on November 10—the day before crypto exchange FTX went bankrupt. “Everyone said Rafi, go into a shutdown, don’t do it,” Anteby said. “I said I can't, because I pre-sold to members and I promised them [the launch is] what will happen.”
Unwilling to renege on that promise to early buyers, he forged ahead. So, what do NFTs have to do with a wellness center?
Each NFT, according to Anteby, corresponds to a level of access. The least expensive tier, Unity, gives holders access to virtual classes. The second tier, Mindful, includes both physical and virtual access to the lounge. The highest tier, Awakened, sells for $5,500 and acts as an all-access pass to the lounge, its benefits and its events (including, Anteby said, “spiritual yacht parties”); Anteby is selling these individually. Both Mindful and Awakened NFTs are lifetime memberships to Rafi Lounge and include free access to the annual retreats it hosts.
But facing the changing seasons of the crypto market, and unwilling to sacrifice his brand by letting the Rafi Lounge tokens be resold into oblivion on public markets, Anteby took the drastic step of controlling his NFT inventory himself – buying up the remainder a mere day after the mint.
Anteby admitted he “lost a quarter of a million dollars” between creating the NFTs and buying them back. But he said it was worth it: “I'm going to take each because I want to control who's coming to my lounge. I want to know that they will be my advocates as well.”
A view of the Rafi Lounge in the afternoon, before a yoga class. Photo: Rafi Lounge
Currently, there are 100 members, 55 of whom are lifetime NFT holders. The 6,000-square-foot rooftop lounge is also open to the public: anyone can buy a 10-day pass for $250, pay the $40 fee for individual classes or come to public events. One of those people is Amie Yaniak, who was diagnosed last May with stage four cancer that has since metastasized into her bones.
“I’ve never been anywhere like this. This was the first class I’ve done since the cancer, and it was just so cleansing,” Yaniak says. While she’s not a member, Yaniak told me she was interested in returning for more classes.
In addition to people like Yaniak, Anteby is also curating a more select crowd of well-to-do celebrities who can act as brand ambassadors for the lounge. He said he wants it to be a sort of more laid-back Soho House, where top minds converge on the Pacific Ocean to make deals and network. Some of the names dropped during my tour of the property included Jamie Foxx (whom Anteby calls a good friend), Chris Noth, Gladys Knight and Equinox co-founder Lavinia Errico, whom I briefly met, since she’s a member of the lounge’s advisory board.
The lounge's entryway and check-in. Photo: Samson Amore
As Tame Impala wafts from the lounge’s speakers, Anteby tells me stories of getting Taoist monks drunk at karaoke bars and of studying medical qigong and tai chi in China. The intricate mandalas he hung on the walls of a yoga room take around two years each to complete, he says, as he carefully places individual grains of sand and uses tree sap to preserve their form. The mandalas are meant to be a contemplation of man’s relationship with nature, which is partly why Anteby designed the NFT versions of them to resemble a sort of elemental fusion of water, fire and earth.
Owning an NFT also confers fractional ownership of the Malibu Mandala Anteby made, which hangs in the lounge.
Anteby, right, speaks with a partner at his lounge in Malibu. Photo: Samson Amore
While Anteby admits the launch hasn’t netted him any profits yet and says he’s out around $1 million launching the place, he’s determined to turn the Rafi Lounge into a franchise, with plans to open future locations in other cities big on tech and wellness, like Miami, Scottsdale, Ariz., Newport Beach and Austin.
Besides the obvious cases like Yaniak’s, Anteby said he thinks the larger tech community needs a breather. “They all have digital burnout,” he said. “It's more than just me helping you to breathe. You need to take care of yourself, and here people do that all the time.”
Samson Amore
Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.
The Learning Perv: How I Learned to Stop Worrying and Love Lensa’s NSFW AI
01:09 PM | December 09, 2022
Drew Grant
It took me 48 hours to realize Lensa might have a problem.
“Is that my left arm or my boob?” I asked my boyfriend, which is not what I’d consider a GREAT question to have to ask when using photo editing software.
“Huh,” my boyfriend said. “Well, it has a nipple.”
Well then.
I had already spent an embarrassing amount of money downloading nearly 1,000 high-definition images of myself generated by AI through an app called Lensa as part of its new “Magical Avatar” feature. There are many reasons to cock an eyebrow at the results, some of which have been covered extensively in the last few days in a mounting moral panic as Lensa has shot itself to the #1 slot in the app store.
The way it works is users upload 10-20 photos of themselves from their camera roll. There are a few suggestions for best results: the pictures should show different angles, different outfits, different expressions. They shouldn’t all be from the same day. (“No photoshoots.”) Only one person in the frame, so the system doesn’t confuse you for someone else.
Lensa runs on Stable Diffusion, a deep-learning model that can generate images from text or picture prompts, in this case taking your selfies and ‘smoothing’ them into composites that use elements from every photo. That composite can then be used to make a second generation of images, so you get hundreds of variations with no identical pictures that land somewhere between the Uncanny Valley and one of those magic mirrors Snow White’s stepmother had. The tech has been around since 2019 and powers other AI image generators, of which DALL-E is the most famous example. Using its latent diffusion model and CLIP, a neural network trained on roughly 400 million image–text pairs, Lensa can spit back 200 photos across 10 different art styles.
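For readers curious what that iterative “denoising” actually looks like, here is a deliberately tiny sketch of the control flow a diffusion sampler follows. This is not Lensa’s code: the `toy_denoise` and `predict_noise` names are ours, the “noise predictor” is a stand-in for the trained neural network real systems use, and real models like Stable Diffusion work in a compressed latent space rather than on a handful of raw numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoise(noisy, predict_noise, steps=50):
    """Iteratively subtract predicted noise from an array.

    Only the high-level loop of a diffusion sampler is shown here:
    start from noise and apply many small corrections, each one
    informed by a model's guess of how much noise remains.
    """
    x = noisy.copy()
    for t in range(steps, 0, -1):
        x = x - predict_noise(x, t) / steps  # small correction per step
    return x

# Pretend the "clean image" is all zeros; a perfect noise predictor
# then just returns the current deviation from it.
noisy = rng.normal(size=4)
restored = toy_denoise(noisy, lambda x, t: x)
```

Each pass nudges the noisy input a little closer to whatever the (here, fake) noise predictor considers clean; after enough steps, structure emerges from what started as pure noise.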
Though the tech has been around a few years, the explosion in its use over the last several days may have you feeling caught off guard by a singularity that suddenly appears to have been bumped up to sometime before Christmas. ChatGPT made headlines this week for its ability to maybe write your term papers, but that’s the least it can do. It can write code, break down complex concepts and equations for a second grader, and generate fake news — as well as help prevent its dissemination.
It seems insane that, when confronted with the Asimovian reality we’ve been awaiting with excitement, dread or a mixture of both, the first thing we do is use it for selfies and homework. Yet here I was, filling up almost an entire phone’s worth of pictures of myself as fairy princesses, anime characters, metallic cyborgs, Lara Croft-ian figures and cosmic goddesses.
And in the span of Friday night to Sunday morning, I watched new sets reveal more and more of me. Suddenly the addition of a nipple went from a Cronenbergian anomaly to the standard, with almost every photo showing me with revealing cleavage or completely topless, even though I’d never submitted a topless photo. This was as true for the male-identified photos as the ones where I listed myself as a woman (Lensa also offers an “other” option, which I haven’t tried.)
Drew Grant
When I changed my selected gender from female to male: boom, suddenly, I got to go to space and look like Elon Musk’s Twitter profile, where he’s sort of dressed like Tony Stark. But no matter which photos I entered or how I self-identified, one thing was becoming more evident as the weekend went on: Lensa imagined me without my clothes on. And it was getting better at it.
Was it disconcerting? A little. The arm-boob fusion was more hilarious than anything else, but as someone with a larger chest, it would be weirder if the AI had missed that detail completely. But some of the images had cropped my head off entirely to focus just on my chest, which…why?
According to AI expert Sabri Sansoy, the problem isn’t with Lensa’s tech but most likely with human fallibility.
“I guarantee you a lot of that stuff is mislabeled,” said Sansoy, a robotics and machine learning consultant based out of Albuquerque, New Mexico. Sansoy has worked in AI since 2015 and claims that human error can lead to some wonky results. “Pretty much 80% of any data science project or AI project is all about labeling the data. When you’re talking in the billions (of photos), people get tired, they get bored, they mislabel things and then the machine doesn’t work correctly.”
Sansoy gave the example of a liquor client who wanted software that could automatically identify its brand in a photo; to train the program, the consultant first had to hire human production assistants to comb through images of bars and draw boxes around all the bottles of whiskey. But eventually the mind-numbing work led to mistakes as the assistants got tired or distracted, resulting in the AI learning from bad data and mislabeled images. When the program confuses a cat for a bottle of whiskey, it’s not because it was broken. It’s because someone accidentally circled a cat.
So maybe someone forgot to circle the nudes when training Stable Diffusion’s neural net, which Lensa uses. That’s a very generous interpretation, and it would explain a baseline amount of cleavage shots. But it doesn’t explain what I and many others were witnessing: an evolution from cute profile pics to brassiere thumbnails.
When I reached out for comment via email, a Lensa spokesperson responded not by directing us to a PR statement but actually took the time to address each point I’d raised. “It would not be entirely accurate to state that this matter is exclusive to female users,” said the Lensa spokesperson, “or that it is on the rise. Sporadic sexualization is observed across all gender categories, although in different ways. Please see attached examples.” Unfortunately, they were not for external use, but I can tell you they were of shirtless men who all had rippling six packs, hubba hubba.
“The Stable Diffusion model was trained on unfiltered internet content, so it reflects the biases humans incorporate into the images they produce,” the response continued. “Creators acknowledge the possibility of societal biases. So do we.” It reiterated that the company was working on updating its NSFW filters.
As for my insight about any gender-specific styles, the spokesperson added: “The end results across all gender categories are generated in line with the same artistic principles. The following styles can be applied to all groups, regardless of their identity: Anime and Stylish.”
I found myself wondering if Lensa was also relying on AI to handle their PR, before surprising myself by not caring all that much. If I couldn’t tell, did it even matter? This is either a testament to how quickly our brains adapt and become numb to even the most incredible of circumstances; or the sorry state of hack-flack relationships, where the gold standard of communication is a streamlined transfer of information without things getting too personal.
As for the case of the strange AI-generated girlfriend? “Occasionally, users may encounter blurry silhouettes of figures in their generated images. These are just distorted versions of themselves that were ‘misread’ by the AI and included in the imagery in an awkward way.”
So: gender is a social construct that exists on the Internet; if you don’t like what you see, you can blame society. It’s Frankenstein’s monster, and we’ve created it after our own image.
Or, as the language processing AI model ChatGPT might put it: “Why do AI-generated images always seem so grotesque and unsettling? It's because we humans are monsters and our data reflects that. It's no wonder the AI produces such ghastly images - it's just a reflection of our own monstrous selves.”
Drew Grant
Drew Grant is dot.LA's Senior Editor. She's a media veteran with over 15-plus years covering entertainment and local journalism. During her tenure at The New York Observer, she founded one of their most popular verticals, tvDownload, and transitioned from generalist to Senior Editor of Entertainment and Culture, overseeing a freelance contributor network and ushering in the paper's redesign. More recently, she was Senior Editor of Special Projects at Collider, a writer for RottenTomatoes streaming series on Peacock and a consulting editor at RealClearLife, Ranker and GritDaily. You can find her across all social media platforms as @Videodrew and send tips to drew@dot.la.