By Bridling Section 230, SCOTUS Can Finally Do What Congress Won’t: Rein In Big Tech

A ruling that returns Section 230 to its original intent would respect the public’s interest while still immunizing platforms that moderate genuinely nasty content.



The Supreme Court announced this week that it will finally attempt to do what Congress has not: bring clarity to Section 230, the law that protects internet platforms from liability for content published by others. While Section 230 operates across the internet, it is also, for Big Tech, a valuable financial subsidy.

In the nearly 30 years since its passage, lower courts have stretched what the plain text suggests is a limited, fairly porous liability shield into a bulletproof one that immunizes platforms for far more than just the messages they deliver. Product liability claims for child sex crimes committed using Instagram fall under Section 230 immunity, as do all manner of the platforms’ own moderation decisions — that is, what they do to the content posted by users.

The question before the court — and, indeed, the one currently bedeviling policymakers — is how far Section 230 immunity ultimately goes. The vehicle for deciding this comes in the form of Gonzalez v. Google, a case filed by the family of a 23-year-old American woman killed in an ISIS attack on a Paris cafe in 2015. The family of Nohemi Gonzalez argues that, under the Antiterrorism Act, Google (which owns YouTube) aided ISIS through YouTube videos — more specifically, through its recommendation algorithms, which placed ISIS videos into the feeds of users.

A divided panel of the U.S. Court of Appeals for the 9th Circuit ultimately ruled that Section 230 protects the ability of platform algorithms to recommend someone else’s content — though the majority agreed with the dissent’s observation of “a rising chorus of judicial voices cautioning against an overbroad reading of the scope of Section 230 immunity,” and noted that “the Internet has grown into a sophisticated and powerful global engine the drafters of §230 could not have foreseen.”

Moreover, in her concurrence, Judge Marsha Berzon concluded that the algorithmic amplification that now powers the social media platforms’ business model falls “well outside the scope of traditional publication” and constitutes a platform “engaging in its own communications with users.”

“Nothing in the history of Section 230 supports a reading of the statute so expansive as to reach these website-generated messages and functions,” she wrote. Despite this, Berzon noted, the court was bound by years of overly broad precedent that it could not undo.

It is the Supreme Court, however, that can untangle years of lower courts’ misreading of the statute, and that is the question that will sit before the justices in the coming months. How they rule has the power to curtail the way the biggest corporations in the world curate both the modern public square and a significant portion of the digital economy.

SCOTUS Can Update 230 for the Modern Internet

The court’s agreement to take this case is a recognition of two things. First, something may be amiss with the statutory interpretation of Section 230 as it currently stands. As I and many others have argued, Congress intended Big Tech’s liability shield to be limited and delineated. Lower courts have instead rendered it expansive and bulletproof, setting bad precedent that continues to bind lower courts even when they raise their own doubts. That bad precedent must be undone.

The legislative history is clear that Section 230 had two ultimate aims: to make the internet safe for kids and to give the platforms the necessary incentive to moderate smut while allowing “a true diversity of political discourse” to flourish. Former Rep. Chris Cox, one of the authors of Section 230, is now paid by the tech companies to argue that he meant the provision to be the seminal charter of internet immunity, but even he cannot hide from his own words of nearly 30 years ago.

Speaking in favor of his amendment, the Online Family Empowerment Act (later known as Section 230), Cox argued, “We want to encourage [internet services] … to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.”

Moreover, a close textual analysis of Section 230 does not support an expansive aim, as Michigan State law professor Adam Candeub has persuasively argued:

Notice what section 230’s text does not do: give platforms protection for content moderation. … That would include ‘disinformation,’ ‘hate speech,’ ‘misgendering,’ ‘religious hatred,’ or for that matter the traffic prioritization that platforms perform to give people content they want.

But the court’s decision to take this case is also a recognition of the instrumental role the major technology platforms now play in the day-to-day lives of citizens. We have a public square and a large portion of the economy mediated by technology, yet the rules that govern them are entirely privatized, subject to judicial distortion, profit motive, the commoditization of individual users, and corporate whim. The values of free speech and a vigorous exchange of ideas and viewpoints are an afterthought.

Justice Clarence Thomas has argued that the role of major internet platforms is so critical in modern society that they may now warrant common carrier regulation. With regard to Section 230, he has suggested that the court should consider “whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by the Internet platforms.”

Despite the hair-raising rhetoric from the tech press and Big Tech’s network of paid policy groups that any change to Section 230 heralds “the end of the internet,” it is not unreasonable for Congress or the courts to revisit a statute that now applies to platforms, and indeed to an entire internet, that did not exist when the original law was passed.

Clarification is especially warranted because the platforms have made a pattern of trying to have their cake and eat it, too — that is, arguing that their content moderation is not their speech, and thus shielded by Section 230, while at the same time arguing that their content moderation is their speech, and thus protected by the First Amendment. This contradiction is facially ridiculous and something neither the courts nor Congress should let stand.

Yet the Supreme Court, of course, could very well get it wrong. A sweeping ruling that rejects any sort of nuance and instead determines that every moderation decision by the tech companies is protected by Section 230, or that every moderation decision is protected by the First Amendment, would be a huge setback for state attempts to step in and reform the tech companies. State laws, like the one out of Texas recently upheld by the 5th Circuit Court of Appeals, would be irreparably harmed.

Rather, a limited, case-by-case ruling that takes into account the original intent of Section 230, along with the technology that has developed since its passage, would respect the public’s interest in preserving a plurality of voices while still immunizing the platforms to moderate the actual nasty content categorized in the statute itself.

In the absence of any recent congressional action on the matter, the Supreme Court has taken for itself the opportunity to clarify the rules of engagement in the modern public square and the digital economy. For those interested in an outcome that both incentivizes the platforms to moderate the true harms that can flourish on the internet and allows a diversity of speech — that is, the original intent of Section 230 — a decision that returns the statute to its original confines would be welcome.