
Facebook is letting misinformation around COVID-19 vaccines and election fraud run rampant through posts in Spanish, a group of advocacy organizations and lawmakers said.
"Facebook continues to fail to effectively moderate Spanish-language misinformation and online hate targeting the Latino community," said Jessica Cobian, senior campaign manager on tech policy at the Center for American Progress.
The Center for American Progress along with Free Press and the National Hispanic Media Coalition are running the campaign "Ya Basta Facebook" or "Enough Already, Facebook" under a coalition they've named the Real Facebook Oversight Board.
In a press briefing Wednesday, the group referenced an April 2020 report from the advocacy group Avaaz that found Facebook flagged 70% of misleading or false English-language posts about COVID-19 but only 30% of similar posts in Spanish.
"It's really hard to track what's happening and how prolific the problem is," said Jessica J. González, co-CEO of Free Press. "There's a lack of transparency from Facebook about what's getting taken down."
In an email to dot.LA, Facebook spokesperson Kevin McAlister said the company is "taking aggressive steps to fight misinformation in Spanish and dozens of other languages."
"A key part of getting accurate information out is working with communities, which is why we're providing free ads to health organizations to promote reliable information about COVID-19 vaccines," he added.
But Cobian pointed to a handful of posts she said Facebook has refused to take down even though they violate the company's own policies. One post in Spanish, from June 2020, shows photos of armed men with a caption mistranslated into English as "Stop proud to defend your country."
Had the caption been translated correctly, Cobian said, the algorithm would have flagged it.
"The actual translation in Spanish should read, 'Stand proud to defend your country,'" said Cobian. "The correct translation shows that the post violates their policy against dangerous individuals and organizations."
In other cases, Cobian said, Facebook flagged posts about voter fraud with a "False information" tag instead of removing them from the platform.
Part of the problem is a lack of representation at Facebook, advocates argue.
"Facebook is headquartered in California, where Latinos are 40% of the population," said Brenda Victoria Castillo, president and CEO of the National Hispanic Media Coalition. "Yet Facebook has little to no Latino representation on their board and C-Suite positions."
She said the coalition came about after her group, based in Los Angeles, sent a letter to Facebook in September 2020 outlining their concerns. She didn't hear back. Two months later, the new group formed and sent a second letter.
Meanwhile, during congressional hearings in November, Mark Zuckerberg was asked how he would prevent the spread of Spanish-language misinformation ahead of the Georgia runoff elections.
"This is something that we are already working on and worked on ahead of the general election," he replied. "We're certainly committed to focusing on this."
The group met with Zuckerberg in December and presented a PowerPoint of a dozen Facebook posts they found concerning. They still weren't satisfied with the company's response.
Among the group's demands is that Facebook hire a C-Suite executive to oversee U.S. Spanish-language content moderation. Ya Basta Facebook is also calling on Facebook to "publicly explain the translation process of content moderation algorithms."
Congressman Tony Cárdenas, a Democrat who represents a heavily Latino portion of the San Fernando Valley, said he will ask Mark Zuckerberg about his plans for moderation during a House hearing next week with tech CEOs.
"This is not going to be the first time we have Mark Zuckerberg and others in front of the Energy and Commerce Committee," he said. "But every time we do have them there, it's unfortunate that they tend to give us rhetorical answers instead of giving some commitments."
On Monday, Facebook announced a new plan to help users learn more about the COVID-19 vaccines and where to get them.
The company is introducing labels containing "credible information" from the World Health Organization that will be tacked onto Facebook and Instagram posts that discuss the vaccine. The label is rolling out in six languages including English and Spanish.
'A Very Precarious Situation': How Trump's Order Could Impact The Fates of Snap, TikTok and Grindr
An executive order that could enable federal regulators to punish social media companies for how they moderate content on their sites would have far-reaching impacts, especially on smaller companies such as TikTok, Snap Inc. and Grindr that lack the budgets to moderate every single message or post on their apps.
President Donald Trump threatened such a change via executive order after Twitter fact-checked tweets that spread misinformation related to voting earlier this week. Rather than edit or block the tweets, the social media company appended a label that said "get the facts about mail-in ballots."
At the heart of the new executive order lies a complex 1996 law, Section 230 of the Communications Decency Act, which courts have broadly interpreted over the years as shielding internet sites and apps from financial liability for what users tweet, post or otherwise publish on their platforms.
It has also protected websites from being held liable for moderating content they see as obscene, violent or otherwise objectionable.
"Section 230 is one of the building blocks for free speech online," Emma Llansó, director of the Free Expression Project at the Center for Democracy & Technology, told dot.LA. "It has been absolutely essential to the creation of very large platforms and very small platforms, to the creation of all kinds of online communities, and to (enabling) different approaches to content moderation."
The section also gives people who operate those online services the legal certainty that they won't end up in court fighting over whether they appropriately took down a specific post out of the tens of thousands on their site, or whether they missed moderating something. "It gives some breathing room," Llansó said.
Rather than battle an endless number of lawsuits, such companies could either decide to not moderate content at all, or go out of business, experts say.
Santa Monica-based Snap Inc. has relied on Section 230 in numerous court cases, including one involving its speed filter, in which victims claimed that the feature, which records a driver's speed at the time of use, encouraged reckless driving. Other cases have involved Snapchats used for harassment between users.
"At the core, those claims try to hold Snapchat accountable over how Snapchatters misuse their tools," said Eric Goldman, a professor at Santa Clara University School of Law and expert on Section 230. "A reduction of 230 will put Snapchat in a very precarious situation. What do they do if they can't rely on this legal immunity?"
Should such immunity change, Snapchat and relatively smaller companies like it — by far the majority of social media companies that aren't Facebook or Google — could be forced from their industries under a hail of lawsuits, experts told dot.LA on Thursday.
"That's a very likely scenario for companies like Snapchat," Goldman said. "Snapchat can't police its premises well enough to prevent people from doing bad things. You can't have a Snapchat conversation where both sides of the conversation are reviewed by Snap employees before it's delivered: a) that's a privacy violation and b) it's not instantaneous.
"What's Snapchat at that point, without any chatting?"
Snap Inc. declined to comment Thursday.
In another high-profile case, a man used the dating app Grindr to terrorize his ex-boyfriend by creating fake profiles that impersonated him, with vulgar screen names and false information. The impostor directed hundreds of potential suitors to his ex's apartment or workplace on a daily basis.
West Hollywood-based Grindr was protected from financial liability in that case by Section 230, even if, in the end, it could or should have done more to ensure the harassing content was flagged and removed, Goldman said.
Meanwhile, Culver City-based TikTok's reliance on Section 230 is more opaque because the company is owned by Beijing-based technology firm ByteDance Ltd. It's unclear how much its leadership is swayed by the liability protections offered by Section 230 versus Chinese internet liability laws and cooperation with the Chinese government, Goldman said.
A TikTok spokesperson did not respond to repeated requests Thursday for comment.
TikTok said last month in a blog post that it plans to open a "transparency center" in Los Angeles that would try to provide outside experts a view into how TikTok's teams moderate content on its platform and give insight into its moderation systems, processes and policies. The company also created a committee of outside experts to advise it on its content moderation.
Dan Schnur, a political strategist and professor, is a member of that advisory committee. He told dot.LA that the core of the outside group's efforts is to figure out how to protect young people from dangerous interactions online. Schnur emphasized that he does not speak for TikTok.
"Even though the president's executive order seems to be motivated by concerns about political speech, it appears that this would also greatly impact a platform's ability to monitor any types of conversation," Schnur said. "TikTok has devoted a great deal of time and attention to making sure that young people are not exposed to information that would compromise their safety. My personal worry is that if a social media platform didn't have the ability to label political content, it'd be even more difficult to protect children from potentially dangerous interactions online."
In mid-May, a coalition of child privacy rights groups filed a complaint against TikTok with the FTC, alleging that the platform is violating terms it previously agreed to when it was fined $5.7 million in early 2019 for violating the Children's Online Privacy Protection Act (COPPA). Numerous U.S. Congressmen on both sides of the aisle have called for an investigation, including 14 House Democrats who sent a letter to the FTC chairman on Thursday.
Sen. Ron Wyden, D-Oregon, who co-authored Section 230, said in a statement Thursday that President Trump's planned executive order is illegal and an effort to bully companies into giving Trump favorable treatment. He said eroding such protections will only make online content more likely to be false and dangerous. Section 230 also doesn't prevent internet companies from moderating offensive or false content, nor does it change the First Amendment of the Constitution.
"Trump is desperately trying to steal for himself the power of the courts and Congress to rewrite decades of settled law around Section 230," Wyden said. "All for the ability to spread unfiltered lies."
__
Reporter Sam Blake contributed to this story. Do you have a story that needs to be told? My DMs are open on Twitter @latams. You can also email me, or ask for my Signal.