Supreme Court’s message in First Amendment cases: Tech companies can moderate social media at will
The Supreme Court remanded two cases challenging social media regulation laws in Florida and Texas to lower courts.

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.
The U.S. Supreme Court has sent back to lower courts a decision on whether states can block social media companies like Facebook and X, formerly known as Twitter, from regulating and controlling what users post on their platforms.
The Florida and Texas laws sought to impose restrictions on social media platforms’ internal policies and algorithms in ways that would affect which posts are promoted and spread widely, and which posts are less prominent or even removed.
In a unanimous decision issued on July 1, 2024, the Supreme Court remanded two cases, Moody v. NetChoice and NetChoice v. Paxton, to the Eleventh and Fifth Circuits, respectively. The Court scolded the lower courts for failing to adequately consider the full scope of the laws’ applications and warned them to consider the Constitution’s limitations on government interference with private expression.
Contrasting views on social media sites
During oral arguments in February 2024, the two sides laid out competing visions of how social media fits into the overwhelming flood of information that characterizes the modern digital world.
The states said the platforms were merely conduits of communication, or “speech hosts,” similar to traditional telephone companies, which are obligated to relay all calls and prohibited from discriminating against their users. On this view, the platforms must likewise relay all posts without discriminating based on what users say.
The states argued that the content moderation rules social media companies impose are not examples of the platforms’ own speech. Rather, the rules govern the platforms’ conduct, causing them to censor certain views by deciding who is allowed to speak on what topics, and that conduct falls outside the protections of the First Amendment.
In contrast, the social media platforms, represented by tech industry trade group NetChoice, argued that the platform guidelines about what is acceptable on their sites are protected by the First Amendment, which guarantees speech free from government interference. The companies say their platforms are not public forums that could be subject to government regulation, but rather private services that can exercise their own editorial judgment about what does and doesn’t appear on their sites.
They argued that their policies are part of their speech and that under their First Amendment rights they should have the right to set and enforce guidelines regarding acceptable speech on their platforms.
The Supreme Court reframes the inquiry
The parties to the litigation — NetChoice, Texas, and Florida — all focused their arguments on the laws’ effects on platforms’ content moderation policies, specifically whether the platforms engaged in protected speech. The Eleventh Circuit Court of Appeals upheld the lower court’s preliminary injunction against the Florida law, holding that the platforms’ content moderation policies were speech and that the Florida law was unconstitutional.
The Fifth Circuit reached the opposite conclusion, finding that the platforms were not engaging in speech but rather in conduct, driven by their algorithms, that was not protected by the First Amendment. The Fifth Circuit found that this conduct amounted to censorship and vacated the lower court’s injunction against the Texas law.
But the Supreme Court reframed the inquiry. It noted that the lower courts had not considered the full scope of activity covered by the law. Thus, while the First Amendment inquiry was appropriate, the lower court’s decision and the parties’ arguments were incomplete. The Court added that neither the parties nor the lower courts had conducted a thorough analysis of whether and how the state law affected other elements of the platform product, such as Facebook’s direct messaging application, or whether the law had any effect on email providers or online marketplaces.
The Supreme Court has directed lower courts to conduct a more rigorous analysis of the law and its effects and has provided some guidelines.
First Amendment Principles
The court found that content moderation policies reflect the platforms’ editorial choices that are protected by the Constitution, at least with respect to what the court called the law’s “heartland application,” such as Facebook’s news feed and YouTube’s homepage.
The Supreme Court asked the lower courts to consider two core principles of the First Amendment. First, the amendment prevents the government from compelling speakers to convey messages they would prefer to exclude. Editorial discretion by entities that compile and curate the speech of others, including social media companies, is activity protected by the First Amendment.
Second, the amendment prohibits government control of private speech, even for the purpose of balancing the marketplace of ideas. Neither state nor federal governments may manipulate that marketplace in order to present a more balanced view.
The Court also confirmed that these principles apply to digital media as well as traditional media.
In the 96-page opinion, Justice Elena Kagan made clear that the First Amendment’s protections do not disappear simply because social media is involved. For now, it appears that social media platforms will continue to be able to moderate content on their sites.
This article was originally published on The Conversation. Read the original article.