Big Tech’s Big Moment Finally Arrives at Supreme Court

Commentary

In what is shaping up to be an action-packed term at the U.S. Supreme Court, the justices upped the ante Tuesday, Oct. 4, by granting review in another banner case: Gonzalez v. Google, Inc.

The case centers on interpretation of a law that has plagued social media users, especially conservatives, for some time by allowing tech giants to ban, promote, alter, or recommend content based on the user’s point of view.

That law is the Communications Decency Act (CDA) of 1996. Section 230(c)(1) of the Act (which bears the long title, “Protection For ‘Good Samaritan’ Blocking and Screening of Offensive Material”) shields “publishers” (like Big Tech platforms) from civil liability for hosting offensive content created by others. It says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230(c)(2) of the Communications Decency Act also provides “Good Samaritan” protection from civil liability for operators of interactive computer services that engage in the good faith removal or moderation of third-party material they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

Courts generally apply a three-pronged test when deciding whether Section 230 shields a provider from a claim of liability. Courts will look to see if (1) the defendant is a “provider or user” of an “interactive computer service”; (2) the cause of action is based on information provided by another information content provider; and (3) the claim treats the defendant as being the “publisher or speaker” of the harmful information at issue. If all three parts are satisfied, then the defendant is immune from liability.

In short, under Section 230, online platform providers that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do.

The provision leaves it to companies to decide whether certain content should be removed and does not require them to be politically neutral. So “good faith removal or moderation” of third-party material has increasingly protected the notoriously left-wing Big Tech cabal, allowing those companies to adjust content as they wish.

Curiously, the content adjustment only seems to work against content of one perspective: the conservative one.

The censorship of conservative voices by Big Tech is well-documented. Some on Capitol Hill have gone so far as to claim that Big Tech practically “owns” the government—a claim that seems increasingly well-founded considering the government’s documented efforts to control messaging both during the pandemic and on politically inconvenient stories. But until Gonzalez, no vehicle presented the appropriate opportunity to consider the question of Section 230’s application.

Which is not to say this is Big Tech’s first run at the Supreme Court.

As recently as 2020, the Supreme Court turned away petitioners who likewise asked for clarification of the parameters of Section 230 in a case called Malwarebytes, Inc. v. Enigma Software Group USA, LLC.

In a statement accompanying the denial of the petition for certiorari, Justice Clarence Thomas wrote that “in an appropriate case, we should consider whether the text of this increasingly important statute [the Communications Decency Act] aligns with the current state of immunity enjoyed by Internet platforms.” Thomas went on to say: “And in the 24 years since [its adoption], we have never interpreted this provision. But many courts have construed the law broadly to confer sweeping immunity on some of the largest companies in the world.”

Congress passed the Communications Decency Act in 1996 after a New York court held an internet service provider liable for a defamatory statement posted on its message board. It was sold to Americans as a necessary guardrail against pornography and obscenity online, while also allowing free speech to flourish.

But now, more than 25 years later, quite the opposite has happened: while Big Tech platforms have throttled, banned, or de-platformed undesirable (read: politically unpopular) speech online, pornography and obscenity have unfortunately flourished.

In granting review in Gonzalez v. Google, the Supreme Court will have to consider some damning facts. The family of Nohemi Gonzalez, the only American killed in the 2015 terrorist attacks in Paris, sued Google (through its YouTube service) for aiding and abetting her killing by ISIS through the “recommending” of content from the militant Muslim group. The suit alleges that by allowing its algorithms to recommend video content from the terrorist group, YouTube is no longer entitled to Section 230 immunity from civil liability.

The petitioners also allege that YouTube provided “material support” to ISIS without which “the explosive growth of ISIS over the last few years into the most-feared terrorist group in the world would not have been possible.” The petitioners claim that “videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled.”

The question presented in Gonzalez is whether Section 230(c)(1) of the Communications Decency Act immunizes interactive online providers (like YouTube) from liability when they make targeted recommendations of information provided by another information content provider (like ISIS), or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) concerning that information.

Thus far, the modern-day Roberts court has largely been guided by textualism, or faithfulness to the plain and ordinary meaning of the words in the laws it interprets. So the Gonzalez case provides a golden opportunity for the Court to clarify exactly what the Section 230 shield of the Communications Decency Act covers and what it doesn’t.

Given that most Americans consume their news on digital devices, and that the flurry of midterm elections will shortly be upon us, the stakes couldn’t be higher.

Gonzalez v. Google won’t be the only time the high court considers the legality of online censorship this term. The Supreme Court also said Monday, Oct. 3, it would consider a separate but related lawsuit involving Twitter. That case was filed by the family of Nawras Alassaf, who was killed in a terrorist attack in Istanbul in 2017. The family claims Twitter, Facebook, and Google violated the Anti-Terrorism Act by allowing ISIS to use their sites. And both Florida and Texas have signaled that they want the Supreme Court to review their laws aimed at stopping social media censorship, laws that have been challenged as violations of the First Amendment to the Constitution and that have produced a circuit split.

Reprinted by permission from The Daily Signal, a publication of The Heritage Foundation.

Views expressed in this article are the opinions of the author and do not necessarily reflect the views of The Epoch Times.


Sarah Parshall Perry is a senior legal fellow in the Edwin Meese III Center for Legal and Judicial Studies at The Heritage Foundation.