This month, the Supreme Court marked a turning point in internet history. The court agreed to hear Gonzalez v. Google, its first case interpreting Section 230 – a once arcane law that is now widely credited with “creating the internet” and is being debated by politicians on both sides of the aisle.
Section 230 states that online businesses will not be “treated as the publisher” of any content provided by a third party, such as someone who posts on the businesses’ websites. Passed by Congress in 1996 as part of the otherwise ill-fated Communications Decency Act, the law provides some legal immunity to players such as Google, Twitter and Facebook for content shared on their platforms by users.
The law protects companies that provide a platform for the speech of others from the constant threat of libel suits, while allowing them to remove content deemed objectionable. This enabled the robust, often divisive discourse that defines the internet today. What could the Supreme Court’s intervention mean for its future?
The Gonzalez case arose after a young woman, Nohemi Gonzalez, was killed in an Islamic State attack in Paris. Her estate and family members allege that Google violated anti-terrorism law by allowing the terrorist organization to post content that furthered its mission on YouTube (which is owned by Google). They also claim that Google’s algorithms promoted the Islamic State by recommending its content to users.
The two courts that have considered the case to date have held that Section 230 immunity covers the alleged anti-terrorism law violations. But when reviewing different statutes in other Section 230-related decisions, the 9th Circuit Court of Appeals, which has jurisdiction over West Coast cases, interpreted Section 230 protections more narrowly than other courts have. The possibility that the same law could mean different things depending on where someone lives in the United States undermines the rule of law. Reconciling such inconsistencies is a common reason for the Supreme Court to take a case and may explain the current court’s interest in Gonzalez, along with emerging questions about algorithmic recommendations. Justice Clarence Thomas has also expressed interest in taking up Section 230 in past dissents.
The court could simply affirm the broad view of Section 230 protection for platforms, reducing their incentives to review the content they carry. If the court takes a narrower view, the result would likely be more content moderation.
Proponents of the narrow position might argue that while broad liability protection was appropriate when the industry first emerged, it is less justifiable now that internet companies are large and dominant. Stricter regulation could place a greater responsibility on companies to exercise discretion over the content they host and bring to potentially millions of people.
On the other hand, those in favor of preserving broad immunity under Section 230 argue that limiting protections to certain types of content will encourage companies to remove anything remotely inconvenient rather than undertake the difficult and controversial task of deciding which side of the line a given piece of content falls on. The result would be the loss of a significant amount of online speech, including anything with even the most tenuous possibility of creating liability.
History provides good reason to fear that shrinking immunity could erode or stifle speech. Congress passed an amendment in 2018 stating that Section 230 does not apply to content that violates laws prohibiting sex trafficking. Two days after the law took effect, Craigslist removed its personals section entirely rather than determine which content was actually related to prostitution. Other companies have followed suit with equally sweeping approaches. This experiment suggests that restricting immunity can reduce the amount of speech available. It may even lead content curators to abandon current efforts to strengthen their oversight, because the more they moderate their content, the more likely they are to be held liable for it.
But the Supreme Court could approach the Gonzalez case in a completely different way, focusing less on content moderation and more on platform design. Section 230 clearly allows companies to remove certain types of objectionable content. What is less clear is whether the law provides similar protection for algorithmic decisions to promote illegal content, which is the issue raised by the Gonzalez plaintiffs’ objection to YouTube’s algorithms. Any online curator must decide how to serve its content to users. The justices could restrict platforms’ ability to use algorithms to recommend content, a strategy currently at the heart of these companies’ business models and on which all users depend.
The Supreme Court’s resolution of the Gonzalez case will likely represent the most significant update to Section 230 in the foreseeable future. Congressional hearings last year on the issues raised by the law reflected a partisan divide between Democrats calling for more content removal and Republicans calling for less – suggesting that legislative consensus is unlikely any time soon. If the Supreme Court follows its planned schedule, we will know by the end of June whether or not it decides to remake the future of the internet.
Christopher S. Yoo is a professor of law and founding director of the Center for Technology, Innovation & Competition at the University of Pennsylvania.