I say “sort of” because today’s ruling doesn’t specifically address social media. It was about public access television in New York City. The question: Was a private company that had been designated by NYC to operate public access stations a de facto “state actor,” bound by the First Amendment the same way the government itself is and therefore forbidden to discriminate based on viewpoint? On rare occasions in the past, the Court has treated private entities as arms of the state based on some unusual relationship they have with the government or some unusual role they happen to play in the community. In those cases, the private entity is treated legally as a sort of state “deputy,” held to the same rules government is. That means, unlike other private actors, it can’t ban someone just because it doesn’t like what that person has to say.
Which is a bigggggg deal potentially in the age of social media. Some critics who resent the capriciousness with which platforms like Twitter and Facebook apply their terms of service to “offensive” content claim that social media behemoths should be treated as “state actors” too. The classic template of a space set aside for public dialogue is the public square, operated by the state itself, they reason, but in 2019 the public square is a virtual space operated by Big Tech. Should a virtual “public square,” open to the public for public debate, retain its legal identity as a “private forum” just because its proprietor is a private entity?
The tech industry was watching this ruling closely. Trade groups and nonprofits had filed amicus briefs urging the Court to rule narrowly: if it held that a private company operating public access stations was a state actor for First Amendment purposes, the decision would stand on the brink of revolutionizing social media’s obligations to its users:
If the Supreme Court were to decide that private companies can face First Amendment liability as state actors because they provide a forum for public speech, the Internet Association warned, “the Internet as we know it will become less attractive, less safe and less welcoming to the average user.” Search engines wouldn’t be able to exercise editorial judgment. YouTube couldn’t take down videos depicting, say, animal cruelty or hate speech. Social media sites couldn’t block offensive content…
According to EFF, platforms open to the public but owned and operated by private companies simply are not public forums for First Amendment purposes. “There can be no ‘public forum,’ as that term of art is used with respect to this court’s public forum doctrine, without significant involvement of the government itself,” its amicus brief said. EFF cautioned the justices against the logical fallacy that private companies can be deemed state actors because they operate public forums, which, by definition, can only be government-controlled.
Today the Court handed down its decision: The company operating the public access stations isn’t a state actor and therefore isn’t barred from screening content based on viewpoint by the First Amendment. The case went 5-4, with all five conservatives in the majority. Writing for the Court was one Brett Kavanaugh, who reasoned that a private entity doesn’t become a state actor unless it performs a function that has traditionally — and exclusively — been performed by government. And it simply ain’t the case that “public forums” have always been public.
The key bit for social media:
Although the four liberals dissented, it’s unclear whether Ginsburg et al. disagree with any part of the excerpts here. Their beef with the majority is that the private company in this case wasn’t just performing a role traditionally performed by government; it had been appointed by the City itself to do so, creating an agency relationship. Twitter and Facebook have no such relationship with the federal government, so this one might have gone 9-0 had it dealt squarely with social media platforms.
Not a major problem for critics of Big Tech, though. This argument, that tech companies are de facto state actors under the First Amendment, has always been a secondary, longshot line of attack. Their main argument is that Section 230 of the Communications Decency Act requires a company to be “neutral” in moderating content in order to enjoy immunity from liability for the things posted on its website. The more bias the company shows while screening content, the theory goes, the less “neutral” it is, which supposedly transforms it from a “platform” into a “publisher.” Once that happens, it loses its immunity. I hear this theory all the time from righties, from Ted Cruz to random people on Twitter grumbling about the latest ban, but it just ain’t true. This primer on Section 230 from EFF explains that the statute not only doesn’t punish platforms for moderating content, it encourages them to do so. If you want to change that, change the law. Don’t misread it.