The US Supreme Court on Tuesday appeared hesitant to make sweeping changes to legal protections for internet publishers as it began considering a pair of cases that could fundamentally alter laws governing online platforms such as Google and Twitter.
The two cases mark the first time the high court will directly weigh in on Section 230 of the Communications Decency Act, which protects online platforms from legal liability over content posted by their users and is widely seen as central to the development of online communications.
In the case before the court on Tuesday, Gonzalez vs Google, the relatives of a 23-year-old American student killed in a 2015 Isis attack in Paris accuse Google of violating US anti-terrorism laws by helping the terror group spread its message: hosting Isis videos on its YouTube platform and recommending related content to users via algorithms that rely on inputs such as viewing history. The plaintiffs argued that Section 230 was enacted before the rise of such algorithms, which have fundamentally changed how content is recommended and consumed online.
During oral arguments, the Supreme Court justices seemed sceptical about interpreting the law to expose platforms to liability for recommended content. Justice Elena Kagan said there was “a lot of uncertainty” in adopting the petitioner’s argument “just because of the difficulty of drawing lines in this area”.
She and other justices suggested the US Congress may be best placed to address such a complex matter. Kagan told the petitioner’s lawyer, Eric Schnapper, that “once we go with you, all of a sudden we’re finding that Google isn’t protected, and maybe Congress should want that system. But isn’t that something for Congress to do? . . . These [justices] are not the nine greatest experts on the internet.”
Some justices also raised the risk that eliminating immunity under Section 230 could trigger a wave of legal challenges. “Hundreds of millions, billions of responses to inquiries on the internet are made every day . . . every one of those would be a possibility of a lawsuit,” said Chief Justice John Roberts.
Google has argued there is no connection between its recommended videos and alleged violations of the Anti-Terrorism Act. It also warned that losing immunity under Section 230 would have significant knock-on effects given the widespread use of algorithms to sort content online.
Google’s lawyer, Lisa Blatt, told the high court that Section 230 “created today’s internet”. She addressed the use of algorithms by saying that “all publishing requires organisation” and that these “features [are] inherent in all publishing”.
Justices will hear oral arguments in a related case, Twitter vs Taamneh, on Wednesday. That case arises from a deadly Isis attack at a nightclub in Istanbul, Turkey, in 2017. The relatives of one victim sued, alleging Twitter, Facebook and Google knowingly assisted the terrorist organisation by failing to stop its supporters from using their sites to disseminate its content.
Section 230 has become a flashpoint for Big Tech critics, who argue it has allowed platforms to skirt responsibility for the spread of damaging material and to impede freedom of speech by sidelining certain users.
A brief filed by the US Department of Justice warned against an “overly broad reading of Section 230”, which it said “would undermine the enforcement of other important federal statutes by both private plaintiffs and federal agencies”. Children’s wellbeing featured in briefs filed against Google’s position, with Child USA, a rights group, arguing the immunity granted by Section 230 has jeopardised children’s protection online amid a boom in internet content.
A string of tech companies, including Microsoft, Meta and Reddit, have filed briefs defending Google’s position. Facebook parent Meta argued that algorithms are a “critical component” of its anti-terrorism policies and that a broad Supreme Court decision “would encourage websites to remove all but the most benign views, turning a marketplace of diverse perspectives into a platform for orthodox perspectives”.