
Elon Musk, internet freedom, and how the Supreme Court might force big tech into a catch-22

SCOTUS FOCUS

“The bird is freed,” Elon Musk tweeted on the night he completed his $44 billion purchase of Twitter.

What he didn’t say is that a series of court cases may soon clip its wings.

A self-described free-speech absolutist, Musk has suggested he will loosen Twitter’s content-moderation rules, allow more objectionable speech to remain on the site, and reinstate some users who have been banned. Three days after reassuring advertisers that he won’t let Twitter become a “free-for-all hellscape,” he demonstrated his own freewheeling approach to speech when he tweeted (and then deleted) a link to a false conspiracy theory about the husband of House Speaker Nancy Pelosi.

Musk’s takeover and expected overhaul of Twitter comes at a remarkable time. The law of the internet may be about to enter its most dramatic transition since the days of CompuServe and AOL. As Georgetown Law scholar Anupam Chander has written, Silicon Valley flourished in the United States largely because of a well-crafted legal regime. Lawmakers and courts in the late 20th century crafted rules that allowed upstart tech companies to operate without fear of legal liability — much as 19th-century judges devised common-law principles to promote industrial development. The legal pillars that helped the internet grow are the same ones that would allow Musk to implement many of the reforms he has suggested. But those pillars are under threat.

Last month, the Supreme Court agreed to hear two cases that test the biggest pillar: Section 230 of the Communications Decency Act, the landmark 1996 law that immunizes tech companies from civil lawsuits arising from user-generated content that they host on their platforms. Under Section 230, if a user posts defamation, harassment, or other forms of harmful speech (like, say, spreading conspiracy theories about an 82-year-old victim of assault), the individual user can be sued, but the platform (with a few exceptions) cannot be.

Gonzalez v. Google and Twitter v. Taamneh might change that. Gonzalez asks whether Section 230 immunity disappears if a platform recommends or amplifies problematic content to users. Taamneh asks whether a company can be held liable for “aiding and abetting” terrorism if any pro-terrorism content appears on its platform (even if the company aggressively removes most pro-terrorism speech).

Many experts on law and technology were shocked when the court decided to review these cases (which will be heard sometime next year). Typically, the justices won’t hear cases of this sort unless the circuit courts are divided on the underlying legal issues, and there is no real circuit split here. (Lower courts that have considered the question have been fairly uniform in their broad interpretations of Section 230.) And the unusual context of both cases — lawsuits brought by families of people killed in terrorist attacks — may make them imperfect vehicles for resolving the panoply of issues that Section 230 touches.

So the fact that the court took the cases at all suggests that at least some justices want to curtail Section 230. One of them, Justice Clarence Thomas, has already telegraphed his view: In solo concurrences last year and earlier this year, he questioned the law’s broad protections and called on his colleagues to take a hard look at them. (I’ve written before about how ideas that Thomas has floated in solo opinions are increasingly garnering majorities on the newly conservative court.)

Separately, two other cases are waiting in the wings. In NetChoice v. Paxton and Moody v. NetChoice, the tech industry is challenging laws in Texas and Florida that restrict platforms’ authority to remove user-generated content. Politicians in those states believe tech companies are biased against politically conservative speech, and they are trying to reduce what they call censorship. Tech companies argue that the First Amendment (not to mention Section 230!) protects their right to set their own rules for their platforms — including barring speech that isn’t necessarily illegal but is harmful, like misinformation about elections or COVID vaccines.

The Supreme Court hasn’t yet decided whether to take up the NetChoice litigation. But unlike with Gonzalez and Taamneh, there is a circuit split: The U.S. Court of Appeals for the 5th Circuit (in an opinion by an acolyte of Justice Samuel Alito) upheld Texas’s law, while the U.S. Court of Appeals for the 11th Circuit struck down Florida’s similar law. So the justices very likely will weigh in.

The upshot for Twitter and other social-media companies is a new world of largely unknown risk. If the Supreme Court shrinks Section 230, Musk can forget about his commitment to lighter moderation. Nearly everything Twitter does is built around content recommendations produced by complex algorithms, which in turn respond to the unpredictable behavior of human users. (The same is true of all other major social-media companies. Search engines, too.) If a company can be dragged into court anytime an automated quirk of its algorithm amplifies some obscure bit of problematic content, the company will have little choice but to remove far more content on the front end.

Should the court uphold the Texas and Florida laws, companies will also face new penalties for removing too much content. And the conundrum could get even worse: One can imagine blue states passing their own platform regulations that directly conflict with those of red states — say, by requiring platforms to remove the same misinformation that red states insist cannot be removed.

Chander believes the ultimate losers in such a regime would be the very things that Musk professes to defend: free speech and an open internet.

“If we impose enormous liability on platforms left and right,” he said, “that means those platforms will now act in a way that dramatically reduces the risks to them — and with severe consequences for our practical speech freedoms online.”

Congress, of course, could fix this problem by clarifying the scope of Section 230. Its key provision, after all, is just 26 words long and is 26 years old — it may be time for an update. Congress also could harness its power under the Constitution’s supremacy clause to preempt any state laws that conflict with Section 230’s protections. But reform proposals (from both the left and the right) have not taken off. Until they do, we are all flying blind.

This column was originally published on Nov. 3 in National Journal and is owned by and licensed from National Journal Group LLC.
