Rana, I’m sure you’ve been watching the fascinating case on internet speech that will come before the Supreme Court on February 21. It’s the first time the top court has considered Section 230 of the Communications Decency Act — the key provision that has shaped online speech in the US, giving internet companies broad immunity both when they allow questionable material on their platforms and when they decide to take some of it down.
This whole issue is beset with so much rhetorical and political posturing that it’s hard to separate out the real issues. Elon Musk’s claim that he bought Twitter to “save” it from the woke brigade is a case in point. Ensuring that Twitter remains a megaphone for the Musks and Trumps of the world doesn’t strike me as a big win for the cause of ensuring a healthy public discourse. But I’m constantly struck by how everyone I meet who’s not a liberal truly believes he is doing important work.
The issue has been further complicated by the market power and politics surrounding Big Tech. When Congress adopted Section 230 in 1996, it was trying to protect small online bulletin boards. The lower courts have looked at this issue many times and always come down on the side of an expansive interpretation of the law to give companies immunity. But the role that today’s tech giants play in the media world is completely different from the small-scale bulletin boards of the 1990s.
I think Clarence Thomas was right in 2020 when he said the lower courts had conferred “sweeping immunity on some of the largest companies in the world.”
But this is where it gets really tricky. At what point should internet companies lose their legal immunity? And is there a way to move the line without wrecking the way much of the internet operates?
This is what the Supreme Court will consider in less than two weeks’ time in the González vs Google case. It was brought by the family of someone who died in the 2015 Paris terrorist attacks that killed 130 people. They argue Google’s YouTube violated the US Anti-Terrorism Act by promoting ISIS videos. Google is claiming immunity under Section 230. The question is whether YouTube should lose that immunity because it actively pushed ISIS videos in front of people.
If you told someone they ought to watch a video because it’s important, then that sounds to me like you’re editorialising and you should take some responsibility. But does an algorithmic recommendation rise to the same level?
The Department of Justice thinks it does; it filed a brief with the court arguing that the video-recommending algorithms “communicate a message from YouTube that is distinct from the messages conveyed by the videos themselves.” In other words, recommendations are a form of comment.
This obviously raises a thorny question for just about any company on the internet. Much of what you see online these days is the product of a recommendation system. If every decision those systems made stripped their makers of legal protection, it could throw a real spanner in the works.
So could the court come up with a narrow interpretation that didn’t wreck a lot of online business models? I would like to think that the purpose of the algorithmic recommendation could somehow figure into this decision. If the machines were acting solely for your benefit, filtering out the noise to give you more of the stuff you want, that seems to me like something that deserves immunity under Section 230.
The appeals court judges who heard the González case appear to have this rose-tinted view of how algorithms work. They concluded that Google “sends third-party content to users that Google anticipates they will prefer,” so it gave Google a pass.
But we all know the algorithms are not as innocent as that makes them sound. They are designed to boost engagement. If an internet company is pushing things at me mainly for its own business benefit, without enough consideration for my overall wellbeing, then I’d be happy for it to bear more responsibility. The trouble is that it’s impossible to discern the true intent behind any algorithmic recommendation.
So as usual when it comes to anything to do with online speech, I feel impossibly divided. I definitely want internet companies to take an active role in content moderation — but I worry about their motives and would like them to be held more to account.
Like many things that are wrong with the internet, I instinctively come back to my own simplistic, catch-all solution: More competition. If there were enough alternatives — if Trump’s Truth Social could attract a large enough audience — then we wouldn’t be worrying so much about all of this.
The internet was meant to be a free-for-all where all views would flourish. The trouble is, markets like social media and internet search are more like the old world of broadcast TV: there are only a couple of real alternatives. For decades, the US had something called the “fairness doctrine” to force broadcasters to maintain balance. No one wants anything like that for the internet. Competition is the only answer.
Rana, I know you’ve thought a lot about this particular issue over the years. What should the Supremes do?
Edward Luce is on book leave and will return later this month.
Cory Doctorow’s piece on “The ‘enshittification’ of TikTok” (what a headline!) spells out the depressing lifecycle that big internet platforms seem to go through. First they try to delight their users. Then they exploit their users so that advertisers and other business customers can make money. Then they step in front of the business customers to take more of the cake for themselves. Then it’s time for the cycle to start again.
I don’t agree with much of what Dan McQuillan has to say about ChatGPT, but he does a good job of summing up the case against AI: that it is a tool big companies and the politically powerful hope to use to further subjugate us. I start from the position that, like all new technologies, AI is value-neutral. The important thing is to question the goals of the people who control it. I am more optimistic than McQuillan.
Joshua Oliver’s piece in the FT this weekend about the final days of Sam Bankman-Fried’s FTX empire makes riveting reading. Everyone knew the walls were closing in except SBF.
Our US edition of the FTWeekend Festival is back! Join Ta-Nehisi Coates, Alice Waters, Jancis Robinson, your favorite FT writers, and more on May 20 in Washington, DC, and online. Register now and, as a newsletter subscriber, save $20 using promo code NewslettersxFestival.
Rana Foroohar responds
Richard, great question. I’m a firm believer that Section 230 needs significant modification. I don’t think any company should be immune if it is knowingly profiting from harms that occur because of paid advertising, promoted posts, or even algorithms designed to push violent content because it attracts more eyeballs. I also think companies should be liable for content that violates civil rights, antitrust rules, or harassment laws. The “we are special because we deal with so much content on the internet” argument has never felt particularly convincing to me.
Every business that claims it is “special” (like banking in the run-up to 2008, or pharmaceutical firms today) tends to be trying to pull one over on regulators and the public. One reason Big Tech companies are so big and powerful is that they don’t have humans moderating content, as news organisations do; that’s how they have been able to manage, monetise and ringfence the algorithmic distribution of so much content. Businesses like this couldn’t work if humans were policing everything. So yes, I think the Supremes should issue a ruling that reflects the idea that the laws that exist offline should also apply online.
But more than this, I totally agree with your point about competition. I’m thrilled about the new Google case brought by the DoJ, which I wrote about last week, not only because the public will have a key role in deciding, but because it’s the perfect antitrust case, encompassing the problems that occur when a single player monopolises all sides of a given market. We need a break-up of Big Tech firms to create more innovation and a safer online world for us all. I like to think of the possibilities through the lens of the motor industry. Remember when carmakers had no liability for crashes and claimed it was impossible for them to take responsibility? Then they were forced to, and we got seat belts. Seat belts are coming to the internet, too, I suspect.