The U.S. Supreme Court ruled Wednesday that a Pennsylvania high school violated a student's First Amendment rights when it punished her for posting a profane message on Snapchat expressing her frustration about not making the varsity cheerleading team.
The major ruling on student free speech rights was an 8 to 1 decision, with Justice Clarence Thomas dissenting. While the court said the punishment imposed by the school against the cheerleader was too severe, it said that schools may discipline students for off-campus speech in some cases.
The case stems from a Snapchat post by Brandi Levy, then 14: a picture of her and a friend raising their middle fingers at the camera with the caption "F*** school, F*** softball, F*** cheer, F*** everything." The Snap, shared with about 250 friends, was posted on a Saturday from a convenience store.
A fellow junior varsity teammate saw the post, took a screenshot and shared it with a coach. The school said the post was disruptive to cheerleader morale and suspended Levy from the team for the upcoming year. Levy sued the school district, saying the punishment violated her free speech rights.
"It might be tempting to dismiss [Levy's] words as unworthy of the robust First Amendment protections discussed herein," Justice Stephen Breyer wrote in the majority opinion. "But sometimes it is necessary to protect the superfluous in order to preserve the necessary."
Santa Monica-based Snap Inc. did not immediately respond to a request for comment. But the case highlights how much social media has become embedded in the fabric of life for teenagers.
The court said the circumstances that allow a school to regulate student speech, even speech made off campus, include serious or severe bullying or harassment targeting individuals; threats aimed at teachers or other students; failure to follow rules concerning lessons, the writing of papers, computer use or participation in other online school activities; and breaches of school security devices.
However, the court noted the list is not exhaustive, as the appropriate scope of regulation could vary based on a student's age, the nature of the off-campus activity and the impact on the school itself.
"We do not now set forth a broad, highly general First Amendment rule stating just what counts as 'off campus' speech and whether or how ordinary First Amendment standards must give way off campus to a school's special need to prevent … substantial disruption of learning-related activities or the protection of those who make up a school community," Breyer wrote.
The court said Levy's posts are entitled to First Amendment protection and that her message did not involve features that "would place it outside the First Amendment's ordinary protection."
"[Levy's] posts, while crude, did not amount to fighting words," Breyer wrote.
Because the posts were made outside of school hours from an off-campus location, the school's ability to regulate the speech was also diminished. The court noted Levy did not identify the school nor target any member of the school with vulgar or abusive language. She also sent the message to "an audience consisting of her private circle of Snapchat friends."
The American Civil Liberties Union, which represented Levy, said on Twitter following the ruling, "The Supreme Court has affirmed what we've said all along — students have greater free speech rights when they are out of school and on their own time."
In order to regulate student speech on campus, the court has ruled that schools must show the activity is disruptive to school activities. Breyer said there is "little to suggest a substantial interference in, or disruption of, the schools' efforts to maintain cohesion on the school cheerleading squad."
A 14-year-old Pennsylvania high school student who took to Snapchat after not getting a spot on the varsity cheerleading team is at the center of a case now being considered by the U.S. Supreme Court that will test the limits of schools' ability to police speech on social media.
In 2017, Brandi Levy, now a college student, shared an image with her 250 Snap friends. It was a picture of her and her friend pointing the middle finger at the camera with the caption "F*** school, F*** softball, F*** cheer, F*** everything."
A fellow junior varsity teammate saw the post, took a screenshot of it and shared it with a coach. The school said the post was disruptive to cheerleader morale and suspended Levy from the team for the rest of the year to "avoid chaos" and maintain a "teamlike environment."
At stake is whether Levy's speech, and that of millions of public school students online, is protected. The court is expected to rule later this month. It's a case that could transform how school districts monitor students' online speech, including on Santa Monica-based Snap Inc.'s apps.
Social media has become embedded in the fabric of life for teenagers. Once-offhand comments now live online and can be shared widely.
The issue is especially tricky for school officials, who must weigh students' speech rights against the use of social media as an early warning system for potential violence, bullying or even self-harm.
The last major ruling on student speech came in 1969, in Tinker v. Des Moines, when the court held that students have free speech rights at school unless officials find the speech will cause "substantial disruption."
Rachel Levinson-Waldman, deputy director of the Liberty & National Security Program at the Brennan Center for Justice, said that based on the justices' comments during oral arguments, she expects the court to skirt some of the broader free speech questions.
Instead, she thinks it's likely the justices will issue a narrow ruling. For example, because the case involves a student athlete, the ruling might just apply to students who voluntarily participate in an extracurricular activity if that speech is about the activity.
Still, she said statements students make on social media when they're off-campus should be protected by the First Amendment. The nonprofit center filed an amicus brief supporting Levy along with Equality California, the Anti-Defamation League and others, joining more than 100 other organizations in supporting the teen. Levy is being represented by the American Civil Liberties Union.
"It's going to have some impact on student speech going forward," Levinson-Waldman said.
The case brings to the fore some of the more difficult questions administrators, parents and students are dealing with in the online world.
If the court allows for monitoring of speech off campus, it could open the door for districts to use more social media monitoring software like Geo Listening, DigitalStakeout and Social Sentinel.
Companies have been trying to fill the gap, marketing social media monitoring services as tools that can prevent self-harm and bullying, and in some cases, mass violence.
Brennan Center research on a database of government purchase orders found that 63 school districts across the country purchased social media monitoring software in 2018, up from six in 2013. Levinson-Waldman noted that the data does not capture every district that may use such software.
The center has found the technology is "largely unproven" and raises privacy, free expression and other civil and human rights concerns. Another problem: some words flagged by the software can have different meanings in different cultures or contexts. That is a particular risk for students of color, religious minorities and students with disabilities, who are disciplined at disproportionately higher rates than their peers.
At the nation's second-largest school district — where students have posted shooting threats or other menacing warnings online — Los Angeles Unified School District officials said they address cyberthreats head on and don't use software to monitor their half million students on social media.
"While we can exercise our authority over out-of-school behaviors that directly and negatively impact the school, such as a threat, we are educators by trade and education is our best intervention," said an LAUSD spokesperson in an emailed statement.
The Glendale Unified School District was at the center of the issue in 2013 after it signed a contract with California-based Geo Listening to monitor students' public posts, sending daily reports to district officials when students mentioned using drugs or hurting themselves or others. The program was prompted by the suicides of two students the previous year after they were bullied online.
"We think it's been working very well," then-Glendale Unified Superintendent Dick Sheehan told the L.A. Times. "It's designed around student safety and making sure kids are protected."
The district renewed the contract in 2015. A district spokesperson said the district hasn't used the software in "several years," but could not say why the contract was terminated.
One of the issues in Levy's case is that she was not on school property when she posted the Snapchat. Rather, she posted it on a Saturday from a convenience store. So the court must decide whether schools can punish students for speech that occurs online and off-campus that may cause disruption inside schools.
The school district's attorney argued that the internet's "ubiquity" and potential for mass dissemination and permanence make the students' location "irrelevant," while ACLU attorneys representing Levy argued it would dramatically expand the disciplinary reach of schools.
During oral arguments, some of the justices seemed to indicate that the punishment did not fit the crime in this case and questioned what kind of speech would be determined to be disruptive — every curse word?
Justice Clarence Thomas acknowledged the difficulty in determining where the speech took place and whether it took place under the school's supervision.
"Aren't we at a point that if it's on social media, where you posted it on social media doesn't really matter?" Thomas said.
An executive order that could enable federal regulators to punish social media companies for how they moderate content on their sites would have far-reaching impacts, especially on smaller companies such as TikTok, Snap Inc. and Grindr that lack the budgets to moderate every message or post on their apps.
President Donald Trump threatened such a change via executive order after Twitter fact-checked tweets earlier this week that spread misinformation about voting. Rather than edit or block the tweets, the social media company appended a line that said "get the facts about mail-in ballots."
At the heart of the new executive order lies a 1996 law known as Section 230 of the Communications Decency Act, which courts have broadly interpreted over the years as shielding internet sites and apps from financial liability for what users tweet, post or otherwise publish on their platforms.
It has also protected websites from being held liable for moderating content they see as obscene, violent or otherwise objectionable.
"Section 230 is one of the building blocks for free speech online," Emma Llansó, director of the Free Expression Project at the Center for Democracy & Technology, told dot.LA. "It has been absolutely essential to the creation of very large platforms and very small platforms, to the creation of all kinds of online communities, and to [enabling] different approaches to content moderation."
The section also gives people who operate those online services the legal certainty that they won't end up in court fighting about whether they appropriately took down a specific post out of the tens of thousands on their site, or if they've missed moderating something. "It gives some breathing room," Llanos said.
Rather than battle an endless number of lawsuits, such companies could either decide to not moderate content at all, or go out of business, experts say.
Santa Monica-based Snap Inc. has relied on Section 230 in numerous court cases, including one involving its speed filter, which gauges and displays a driver's speed at the time of use; victims claimed the feature encouraged reckless driving. Other cases have involved the use of Snapchats for harassment between users.
"At the core, those claims try to hold Snapchat accountable over how Snapchatters misuse their tools," said Eric Goldman, a professor at Santa Clara University School of Law and expert on Section 230. "A reduction of 230 will put Snapchat in a very precarious situation. What do they do if they can't rely on this legal immunity?"
Should such immunity change, Snapchat and relatively smaller companies like it — by far the majority of social media companies that aren't Facebook or Google — could be forced from their industries under a hail of lawsuits, experts told dot.LA on Thursday.
"That's a very likely scenario for companies like Snapchat," Goldman said. "Snapchat can't police its premises well enough to prevent people from doing bad things. You can't have a Snapchat conversation where both sides of the conversation is reviewed by Snap employees before it's delivered: a) that's a privacy violation and b) it's not instantaneous.
"What's Snapchat at that point, without any chatting?"
Snap Inc. declined to comment Thursday.
In another high-profile case, a man used the dating app Grindr to terrorize his ex-boyfriend, by creating fake profiles that impersonated him, with vulgar screen names and false information. The imposter directed hundreds of potential suitors to his ex's apartment or workplace on a daily basis.
West Hollywood-based Grindr was protected from financial liability in that case by Section 230, even if, in the end, it could or should have done more to ensure the harassing content was flagged and removed, Goldman said.
Meanwhile, Culver City-based TikTok's use of Section 230 is more opaque because it's owned by Beijing-based technology firm ByteDance Ltd. It's unclear how much the company's leadership is swayed by the liability protections offered by Section 230 versus by Chinese internet liability laws and cooperation with the Chinese government, Goldman said.
A TikTok spokesperson did not respond to repeated requests Thursday for comment.
TikTok said last month in a blog post that it plans to open a "transparency center" in Los Angeles that would try to provide outside experts a view into how TikTok's teams moderate content on its platform and give insight into its moderation systems, processes and policies. The company also created a committee of outside experts to advise it on its content moderation.
Dan Schnur, a political strategist and professor, is a member of that advisory committee. He told dot.LA that the core of the outside group's efforts is to figure out how to protect young people from dangerous interactions online. Schnur emphasized that he does not speak for TikTok.
"Even though the president's executive order seems to be motivated by concerns about political speech, it appears that this would also greatly impact a platform's ability to monitor any types of conversation," Schnur said. "TikTok has devoted a great deal of time and attention to making sure that young people are not exposed to information that would compromise their safety. My personal worry is that if a social media platform didn't have the ability to label political content, it'd be even more difficult to protect children from potentially dangerous interactions online."
In mid-May, a coalition of child privacy rights groups filed a complaint against TikTok with the FTC, alleging that the platform is violating terms it agreed to when it was fined $5.7 million in early 2019 for violating the Children's Online Privacy Protection Act (COPPA). Numerous members of Congress on both sides of the aisle have called for an investigation, including 14 House Democrats who sent a letter to the FTC chairman on Thursday.
Sen. Ron Wyden, D-Oregon, who co-authored Section 230, said in a statement Thursday that President Trump's planned executive order is illegal and an effort to bully companies into giving Trump favorable treatment. He said eroding such protections would only make online content more likely to be false and dangerous. Section 230 does not prevent internet companies from moderating offensive or false content, nor does it override the First Amendment.
"Trump is desperately trying to steal for himself the power of the courts and Congress to rewrite decades of settled law around Section 230," Wyden said. "All for the ability to spread unfiltered lies."
Reporter Sam Blake contributed to this story.