Press "Enter" to skip to content

Opinion: The Section 230 Hearing Was Like Watching Your Boomer Uncle Ask You To Explain A Meme

Facebook’s Mark Zuckerberg, Google’s Sundar Pichai and Twitter’s Jack Dorsey met remotely with the Senate Commerce Committee Wednesday morning to discuss Section 230 of the Communications Decency Act of 1996. The provision, which has been called the single most important law shaping social media, dictates whether and how content is moderated on websites like Facebook, Google and Twitter.

Much of the discussion of the law in recent years has revolved around whether Section 230 should be revoked or reformed. Unfortunately, today’s hearing had less to do with the law and instead became an opportunity for Republican senators to berate the three CEOs with accusations of censorship of conservative opinions, a claim that serves the interests of President Trump rather than the American people the senators represent. The nearly four-hour event was, essentially, a dumpster fire. Here’s why:

What is Section 230?

Section 230 of the Communications Decency Act of 1996 grants immunity to internet companies that allow users to publish their own content. The law protects those companies from being held liable if, say, a user posts a violent threat on the website and then carries it out in real life. The section’s “Good Samaritan” clause adds further protection for companies that choose to moderate or remove content they deem harmful, since it could otherwise be argued that removing that content violates users’ free speech, a constitutional right in the United States.

Put together, Section 230 and its Good Samaritan clause enable internet companies to make important content moderation decisions and set their own terms and conditions without the threat of legal action. The law protects a company from being sued for violating free speech rights, since the Good Samaritan clause says the company cannot be held liable for moderating content it determines could be harmful, and it also prevents the company from being held liable if that harmful content stays on the platform. For example, the law prevents Facebook from being held legally responsible for allowing the New Zealand mosque shooter to livestream his attack, but it also protects the company from being sued for taking that content down.

In recent years, lawmakers and tech companies have fought both for and against Section 230. Critics argue that the law allows companies like Facebook to let misinformation and possible election interference run rampant on their platforms. In 2016, for example, the lack of oversight of who could post political advertisements contributed to one of the biggest tech scandals of all time: Cambridge Analytica.

Social media companies, however, argue that Section 230 is an important piece of legislation that upholds freedom of speech on the internet, and that without its protections the platforms could not exist as we know them. Considering that Facebook, Google and Twitter each represent billions of dollars in the U.S. economy, a significant change to Section 230 could upend their business models entirely.

Both President Trump and former Vice President Biden have raised concerns about Section 230 and argued that it should be revoked, though for different reasons. Trump, who argues that the Good Samaritan clause allows for “selective censorship,” has attacked companies like Twitter for flagging several of his posts as misinformation during the ongoing COVID-19 pandemic. Biden also wants to revoke Section 230, but for a different reason: he argues that moderating content only after it has been published enables the spread of harmful misinformation that could upend American democracy as we know it.

So what happened at the Senate hearing?

The CEOs of Facebook, Google and Twitter were subpoenaed last month to discuss a possible revocation or reform of Section 230 with the Senate Commerce Committee. The committee grilled each CEO on the role his company plays in upholding democracy. What was supposed to be a discussion of Section 230, however, turned into nearly four hours of senators presenting the CEOs with screenshots of posts on their respective sites and asking why the posts were or were not removed. It was essentially like watching a millennial explain to a boomer how Twitter and Facebook work.

Several Republican senators used the hearing as an opportunity to further accuse the CEOs of engaging in what President Trump once called “selective censorship,” referencing posts that had been flagged as misinformation on platforms like Twitter and Facebook. Questions focused mostly on why Twitter and Facebook chose to “censor” stories from conservative outlets like Breitbart and the New York Post, both known for being unreliable sources, and not “liberal media” sources as well. Much of the hearing saw Republican senators berating the CEOs with questions that served President Trump by perpetuating conspiracy theories about the social media sites being out to censor him. Little was said, however, on behalf of the American people about protecting social media users from harmful misinformation that could damage American democracy. The focus was not on whether the information being discussed was true, but on why people like President Trump couldn’t spread it regardless.

Senators Ted Cruz (TX), Ron Johnson (WI), Marsha Blackburn (TN), Mike Lee (UT) and Roger Wicker (MS) argued that the platforms censor right-wing voices, but gave the CEOs little opportunity to discuss or defend their companies’ actions. Dorsey and Zuckerberg were asked about specific posts in some instances and about broad allegations of censorship in others. Senator Johnson, at one point, asked Dorsey why Twitter did not remove a gag post accusing the senator of strangling a dog, even though the original poster stated within the tweet that the accusation was a lie made as a joke. Later, Senator Tammy Duckworth (IL) condemned the committee for asking questions on behalf of the President rather than the American people.

Both Dorsey and Zuckerberg defended their companies’ content moderation policies and said that the liberal use of the term “censorship” does not apply to what their companies are doing. Each clarified that removing content for violating their terms and conditions amounts to neither censorship nor partisanship on their part. Little was said, though, about whether Section 230 should see any changes or revocation. Dorsey had, by that point, posted a statement on Twitter that echoed his opening statement at the hearing:

“I want to focus on solving the problem of how services like Twitter earn trust,” Dorsey said in his opening statement. “And I also want to discuss how we ensure more choice in the market if we do not. During my testimony, I want to share our approach to earn trust with people who use Twitter. We believe these principles can be applied broadly to our industry and build upon the foundational framework of Section 230 for how to moderate content online. We seek to earn trust in four critical ways: (1) transparency, (2) fair processes, (3) empowering algorithmic choice, and (4) protecting the privacy of the people who use our service. My testimony today will explain our approach to these principles.”

So are the tech companies doing the right thing?

Not necessarily. Much as Mark Zuckerberg drew criticism for his bombshell announcement that Facebook would not fact-check political advertisements ahead of the 2020 election, Dorsey’s appearance before the Senate Commerce Committee today garnered similar criticism for Twitter. When grilled on Twitter’s misinformation policy, Dorsey clarified that posts denying historical events like the Holocaust do not violate Twitter’s terms and conditions. Facebook and TikTok banned Holocaust denial posts earlier this year, saying such posts often go hand in hand with harmful hate speech on their platforms.

The revelation that Twitter does not count conspiracy theories like Holocaust denial among the things that violate its terms and conditions opened the door to a discussion of what Twitter does remove from its platform. If false information is not banned, then what is? Several senators asked both Zuckerberg and Dorsey why, if the companies could not prove the story was false, users were barred from sharing a New York Post exposé on Hunter Biden. Dorsey clarified that the article was removed from Twitter for violating its policy on hacked materials, though it remains unclear whether the materials in the article were obtained through hacking. Zuckerberg chimed in that Facebook acted on advice from the FBI to be diligent in recognizing potential disinformation, but neither CEO could provide evidence that the story was part of a Russian disinformation campaign.

Twitter’s political policies are often a red herring for its lack of oversight.

Dorsey famously announced last fall that Twitter would not allow any political advertising on its platform ahead of the 2020 election, a response to Facebook’s political ad controversy in 2019. But political ads were never Twitter’s real problem; misleading or false information always has been. Unlike Facebook, though, Twitter doesn’t have a clear policy on false information. Twitter, like Facebook, relies on its users to monitor one another’s content on the site. Each company offers a way for users to report posts that might violate its terms and conditions, but Twitter doesn’t offer an option to flag a post as misleading or false.

Screenshots: Facebook’s report screen and Twitter’s report screen.

When grilled on this, Dorsey claimed “we rely on people calling that speech out,” referring to Twitter’s preference for encouraging discussion of a post that might be false rather than taking it down. But a 2019 update to the site allowed users to turn off replies to their tweets, a move that stripped away users’ ability to call out falsehoods if the original poster didn’t want them to.

The feature not only promotes the spread of false information but also undercuts Twitter’s user culture of discussing a tweet that might be false. Without a way for users to report a tweet that contains false information, a post can simply circulate the web with no clarification attached. This isn’t a big problem with the President, since Twitter flags Trump’s misinformation consistently, but it lets fringe accounts get away with spreading false claims, such as that COVID-19 is a hoax or that Twitter is practicing selective censorship against conservative voices. The latter claim has sent many conservative Twitter users to fringe websites that enable the spread of extremist rhetoric.

The TLDR:

Thanks to Section 230, tech companies like Twitter, Facebook and Google have largely been exempt from the standards of journalistic integrity that media publishers are held to. Today’s hearing, which was supposed to address a possible reform or revocation of that piece of legislation, did little but exemplify the severity of the problem, as both Republican and Democratic senators dropped the ball on any effective discourse. While some senators argued that the tech companies act as liberal mouthpieces by stifling conservative viewpoints, a narrative that stems from the lack of oversight on social media, others brought up the harm caused by allowing tech companies to write their own rules on what is and isn’t allowed when it comes to misinformation.

What could have been an effective conversation about the role misinformation plays on social media, and therefore within our democracy, was instead stifled by four hours of senators asking why they were being cyberbullied on social media. This close to the election, the dumpster fire that was this morning’s Senate Commerce Committee hearing likely did little more than amplify the very misinformation it was meant to address.