Last week, the Supreme Court heard oral arguments in two cases, Gonzalez v. Google and Twitter v. Taamneh. Both have the potential to upend Section 230 of the Communications Decency Act, which protects websites from being held liable for user-generated content. With the exception of copyright violations and content that breaks federal criminal law, Section 230 prevents companies from being sued over users’ posts. And since the act was passed in 1996, Section 230 has granted social media companies the ability to host, organize and recommend content.
Up until now, lower courts have consistently upheld Section 230—which makes these Supreme Court cases unusual.
“The speculation was that the Supreme Court only takes up a case of statutory interpretation when they want to change how the courts are interpreting it,” says Emma Llansó, director of the Center for Democracy and Technology’s Free Expression Project. “Otherwise, there's really no problem for them to solve.”
Regardless of why the justices selected these cases, Llansó says their questioning last week reveals that none of them has a definitive idea of exactly how the statute should be interpreted. While Justice Clarence Thomas has written many statements criticizing the law’s wide scope, the oral arguments were more of a fact-finding mission for the justices to learn different ideological perspectives on the topic.
According to Llansó, the justices were interested in determining what kinds of algorithmic processes might be protected under Section 230. While a so-called “neutral” algorithm might recommend either benign or harmful results based on user input, Llansó says there were multiple questions about limiting “discriminatory” algorithms, such as an ad-targeting system that shows housing ads based on race. Currently, Section 230 prevents companies from being sued over either algorithm. But Llansó believes the Court is interested in limiting what types of algorithmic recommendations are covered by Section 230.
Corbin Barthold, internet policy counsel at TechFreedom, says the justices might simply have wanted to hear their first case regarding Section 230. However, he believes the oral arguments have revealed just how tricky it would be to differentiate algorithmic recommendations from the rest of Section 230.
“What we discovered was maybe there was a certain belief that recommendations were a bite-sized piece of Section 230 that could be clipped off,” Barthold says. “Once we got it all briefed and had the oral argument, we found that it's not that simple.”
According to Barthold, the Gonzalez v. Google case fails to draw a clear line between what kind of content should be protected by Section 230 and what should lose that protection. This leaves the justices with two options: maintain the status quo of protecting recommendations or completely upend algorithmic recommendations.
“If those are the only two options, it seems very likely that the justices will opt for option one, which is don't disrupt things,” Barthold says.
The Supreme Court’s decision won’t be announced until this summer, and it is currently unclear whether the justices will decide to change Section 230. Barthold says doing so would lead social media companies to moderate content more aggressively and restrict users’ speech, adding that “everybody would be unhappy with the amount of extra content that would get taken down.”
Llansó says any changes would result in years of litigation as lower courts figure out how to apply the Supreme Court’s interpretation. Still, while large corporations, like Meta and Google, have the financial backing to weather these lawsuits, smaller social media startups don’t. Considering that investors must weigh the potential legal risks before financing a company, Llansó says funding in the social media space would likely decrease.
“A smaller service actually risks being sued out of business and into oblivion,” Llansó says.
Even if these two cases don’t result in any changes to Section 230, Llansó expects to see future cases parsing Section 230—ones that are perhaps better suited to address the issues of algorithmic discrimination that the justices seem to be interested in.
“This will not be the last Section 230 case that we see before the court,” Llansó says. “They might be on the lookout for cases that bring more clearly and directly some of the questions they were looking at.”