Press "Enter" to skip to content

Why You Should Care That Mark Zuckerberg, Sundar Pichai and Jack Dorsey Were Just Subpoenaed by the Senate Commerce Committee

The Senate Commerce Committee voted unanimously on Thursday to serve subpoenas to three major tech CEOs as part of a reexamination of the liability protections in Section 230 of the Communications Decency Act. The subpoenas could be a monumental step toward reforming how social media is moderated, at a time when the spread of misinformation on social media is repeatedly cited as a major threat to our democracy.

Section 230 of the Communications Decency Act of 1996 says that interactive computer services cannot be held liable for the content their users publish, effectively relieving them of the responsibilities that come with being treated as a publisher.

However, the spread of misinformation and the weaponization of social media in recent years have raised the question of whether social media sites should keep these protections or be held to the same legal standards as publishing companies. The section is often misinterpreted, even by legal professionals, but it is widely credited as the single most important law allowing social media companies to exist in the first place.

Media companies (like Grit Daily) are held responsible for the content they publish. As publishers, we are legally liable for any defamation or false statements published on our behalf, which makes it important for us to do our due diligence in fact-checking the claims we make in our content. Section 230 removes that liability for interactive computer services, such as social media platforms or even the comment section on a media company's website, while encouraging them to do what is called "Good Samaritan moderating" (more on that later). While Grit Daily and its writers can be held liable for the content on its site, users who publish content in its comment section are not. On social media, the same protection applies to any user who publishes content on a platform, and it shields the platform from liability without requiring it to censor.

Historically, the idea behind Section 230 of the Communications Decency Act of 1996 goes back to a time long before the internet existed.

A 1959 Supreme Court case, Smith v. California, helped protect businesses like bookstores from being prosecuted for the content or merchandise found in their stores. The Supreme Court ruled that prosecuting a California bookstore for carrying obscene material violated the store's First Amendment rights. The reasoning was that as long as a bookstore wasn't knowingly distributing something illegal (child pornography, for example), it could not be held liable for the content of the books it was selling, because it would be impossible for a bookstore to vet every single book that came into the shop. That same idea still applies today through Section 230 of the Communications Decency Act of 1996, but to social media, and that might be the problem.

A clause in Section 230 encourages interactive computer services to do what is called "Good Samaritan moderating," which immunizes companies that remove content "in good faith," meaning that, legally, they act as Good Samaritans rather than as editors or publishers. The clause is complicated and often misinterpreted, even by legal experts, but it allows social media sites to do what is called notice-and-takedown moderation: when they notice illegal activity on their platforms they are protected if they remove it, but they are also protected by Section 230 as a whole if they leave it up.

This kind of dual immunity clearly protects social media companies when they come across content that is explicitly illegal, like child pornography or a live stream of a crime such as a mass shooting. But things get murkier with content that is not explicitly illegal, like political disinformation or fringe conspiracy theories.

These issues, while not necessarily legal problems that can be enforced against right now, are just as serious and pose a major threat to democracies around the globe, not just in the United States. The broad question of how to change these laws or enforce new moderation standards has been debated for years, but because political leaders are only just starting to grasp the scope of the issue, these problems tend to be addressed only after they have caused irreversible damage (the genocide in Myanmar, for example). Both President Trump and former Vice President Joe Biden have called for a complete revocation of Section 230, though naturally for different reasons.

The explosive revelation that Cambridge Analytica, a political data firm based in the United Kingdom, harvested and manipulated the data of millions of Facebook users in order to influence elections around the world put the push to reexamine Section 230 and its clauses into overdrive. The firm's work fueled widespread disinformation campaigns, platform manipulation and even voter suppression in places like the United States.

But here's the thing: much of what Cambridge Analytica did with the data it obtained through Facebook was not necessarily illegal at the time. The way the company obtained the data was, but what it did with it was not, and social media companies are still at a loss as to how to handle this type of platform manipulation in the future.

A new bill introduced by Senator Lindsey Graham (R-SC), which triggered the subpoenas to the tech CEOs, would reexamine the Good Samaritan clause in Section 230 of the Communications Decency Act of 1996. The bill would revisit the law and reassess how tech companies like Facebook, Twitter and Google should approach moderating content on their platforms.

The rapid-fire spread of misinformation has drawn bipartisan scrutiny for quite some time now, and the unanimous vote to serve CEOs Mark Zuckerberg, Jack Dorsey and Sundar Pichai with subpoenas reflects the mounting pressure for legal reform around social media content moderation. The upcoming hearing is separate from the antitrust proceedings that subpoenaed both Zuckerberg and Pichai and called for them to appear before a House antitrust committee. It will question how the CEOs think content moderation should be approached in the future to prevent the spread of dangerous information.

The hearing associated with the subpoenas has not yet been scheduled, but it seems unlikely that any change will come before the end of the year, much less before the 2020 election.