How Two Upcoming SCOTUS Rulings Could Reshape the Internet

Two cases before the Supreme Court this week will reconsider Section 230 of the Communications Decency Act, and could have a far-reaching impact on how content is hosted, recommended and moderated across the internet.
What is Section 230?
The Communications Decency Act of 1996 established a number of legal rules and precedents that continue to shape how companies and individuals can permissibly interact online. Essentially, Section 230 says that a website, unlike a conventional publisher, can't be held liable for anything potentially offensive or illegal that's posted by an unaffiliated user or a member of the public.
That includes what's known as "Good Samaritan" protection for companies that own websites or internet platforms: so long as they act in good faith, internet companies can't be sued in a civil proceeding for removing obscene, violent, harassing or otherwise objectionable material.
Though Section 230 is fairly far-reaching in its application, there have always been a few key exceptions, particularly regarding pre-existing federal and state laws. Websites are still required to remove material that infringes copyrights or violates sex trafficking laws, and they're not granted immunity from federal criminal law or from state criminal laws that are consistent with the federal statute.
In the wake of a number of livestreamed mass shootings, and growing concerns about the spread of election and medical misinformation on internet forums, the law's far-reaching protections have been called into question by politicians on both sides of the aisle, some of whom have suggested amending Section 230 or eliminating it altogether. In his 2020 run for president, former Texas Representative Beto O'Rourke campaigned specifically on amending the law to force internet companies to be more proactive about removing hate speech.
Nonetheless, these protections are widely credited with allowing the modern internet to flourish, enabling everything from social networks to search engines to host content, develop recommendation algorithms, and otherwise serve as a hub for public communication without constantly worrying about being sued into oblivion.
So what's going on with these two court cases?
The cases in front of the Supreme Court this week will take a fresh look at Section 230 protections by considering a relatively extreme edge case: posts made by the militant group the Islamic State, or ISIS, which the United Nations has labeled a terrorist organization.
The first case, Gonzalez v. Google, concerns a series of coordinated ISIS attacks around Paris in 2015 that killed 130 people and wounded 500 more.
Nohemi Gonzalez, 23, was killed in the 2015 attack at a Parisian bistro. Her family has argued that Google aided ISIS recruitment efforts by allowing the group to post violence-inciting videos to YouTube, and by specifically featuring ISIS clips in its recommendation algorithms. The Gonzalez v. Google decision will deal largely with that recommendation process, and with whether internet companies can be held liable for making targeted, unsolicited recommendations of material posted by an outside party. If Google curates the YouTube library and engages in other traditional editorial functions, does that make the company a publisher rather than simply a "service provider"? (This debate is often summarized as "platform vs. publisher," but as TechDirt helpfully points out, the term "platform" doesn't actually appear in Section 230. Internet companies are referred to as "service providers," a key distinction.)
The second case, Twitter v. Taamneh, centers on a New Year's Day attack by the group two years later on an Istanbul nightclub, which killed 39 people. Filed by the family of Nawras Alassaf, a Jordanian citizen killed in the 2017 ISIS attack on Istanbul's Reina nightclub, the lawsuit argues that Twitter and other tech companies knew their platforms played a significant role in ISIS recruitment efforts, yet did not take aggressive steps to remove the organization and its members from their services.
Why do these cases matter so much?
Both of these cases raise tough, ambiguous questions about Section 230 and how its protections are applied in practice. Gonzalez v. Google in particular could prove hugely consequential because recommendation algorithms are central to the architecture of the modern internet.
Algorithmic recommendation doesn't just power things like YouTube's "related videos"; it's also what surfaces relevant results in Google's search engine. A particularly extreme ruling in the Gonzalez case could theoretically find Google liable for objectionable content on any outside website to which it directs users. The lawsuit itself attempts to allay these concerns by differentiating between YouTube recommendations and Google search results. Nonetheless, the distinction highlights just how far-reaching and significant these rulings could be.
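To see why that distinction is so slippery, consider a minimal sketch in Python (purely illustrative, with made-up data; it reflects nothing about Google's actual systems). Stripped to their core, a search engine and a recommender share the same mechanic: score third-party items against some context and surface the top matches. Only the source of the context changes.

```python
# Illustrative sketch only: a toy ranking function, not any real system.
# The same scoring-and-sorting step serves both "search" and "recommend";
# only where the context tags come from differs.

def score(item_tags: set[str], context_tags: set[str]) -> float:
    """Jaccard similarity between an item's tags and the current context."""
    if not item_tags or not context_tags:
        return 0.0
    return len(item_tags & context_tags) / len(item_tags | context_tags)

def rank(items: list[dict], context_tags: set[str], k: int = 3) -> list[dict]:
    """Return the k third-party items most relevant to the context."""
    return sorted(items, key=lambda it: score(it["tags"], context_tags), reverse=True)[:k]

# Hypothetical third-party uploads; the host wrote none of this content.
uploads = [
    {"title": "Cooking pasta at home", "tags": {"food", "howto"}},
    {"title": "City travel vlog", "tags": {"travel", "vlog"}},
    {"title": "Knife skills 101", "tags": {"food", "howto", "kitchen"}},
]

# "Search": the context comes from a user's query terms.
print(rank(uploads, {"food", "howto"}))

# "Recommendation": the context comes from the video the user just watched.
print(rank(uploads, {"food", "kitchen"}))
```

Because the ranking step is identical in both cases, a ruling that treats algorithmic recommendation as an "editorial" act could be difficult to cabin away from ordinary search results.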
The Twitter ruling is likely to be narrower, attempting to establish what steps a website or platform must take to "aggressively" combat objectionable content, such as terrorist recruitment. There are a number of open questions to confront here, particularly around what actually constitutes "aiding and abetting" a terrorist organization.
For example, if a court agrees that Twitter provided a gathering place or recruitment tool for ISIS, can the company be held responsible for terrorist acts that weren't specifically plotted or discussed there?
If ISIS used Twitter to recruit, can Twitter be sued over a later ISIS attack, even one that wasn't planned on the platform and that the company had no way of knowing about?
The Biden administration has argued that Twitter could theoretically be held liable in certain circumstances, even if the company didn't specifically know about the attack being plotted and did not host discussions in support of it. However, the Department of Justice added that the plaintiffs in Twitter v. Taamneh have not done enough to demonstrate the company's specific support for ISIS beyond the "generalized support" it offers to all users. The Taamneh family argues that the Anti-Terrorism Act was written to give plaintiffs "the broadest possible basis" to sue companies that assist terrorist groups.
Twitter, for its part, has suggested that by reconsidering the definition of "aiding and abetting" terrorism, the case poses a particularly slippery slope: even aid organizations and NGOs could become liable if their assistance or services inadvertently end up aiding ISIS operations.
Looking Ahead
Even if these two cases don't result in a major change to legal precedent around Section 230, more challenges to the law are coming in the months and years ahead. In 2022, a surprise ruling by the Fifth Circuit Court of Appeals upheld a Texas law that bans apps and websites from moderating content based on "viewpoint," which could have major implications for free speech and content curation moving forward. There are no easy answers when it comes to communication and publishing online, which is why we're probably going to be having these kinds of arguments for a long time.