In The World Of Facebook, Self Policing Is Key

Published on April 19, 2019

When a gunman opened fire in a Christchurch, New Zealand mosque last month, the attack was livestreamed on Facebook, where it was seen by as many as 200 people before a moderator took it down. Meanwhile, in upstate New York, a man posted violent threats against Congresswoman Ilhan Omar on Facebook. The posts and profile have since been removed, but only after the FBI got involved. In an online community of over one billion active users, Facebook is finding it impossible to get ahead of inappropriate content posted to its site. To combat these types of posts, the network is calling on its own users to police one another more than ever before, a strategy that highlights a major issue with the social network itself.

It should go without saying that you wouldn't run through a crowded public space screaming your frustrations with the government. Most people also wouldn't walk into a room full of family and friends and announce that they're suicidal. It's not that reaching out isn't okay; it's that Facebook is no less a public space than your office, church, school, or local hangout spot.

There is a time and a place for everything, and people seem to forget that when hiding behind the veil of their devices. Facebook, for example, employs an outsourced team of moderators hired specifically to review reported posts and determine whether they violate the company's Terms of Service. If no one reports a post, though, what happens? It would be impossible to comb through every Facebook post internally to police how the network's billion-plus active users interact with one another. Machine learning can only go so far, and in an online world where dozens of languages are in regular use, it would take an extraordinarily sophisticated algorithm to moderate and search for inappropriate content across the whole of Facebook.

Facebook maintains a strict “remove, reduce, inform” policy on offensive content. The company is investing resources into teaching its users general online etiquette. Meanwhile, it’s also working to remove and reduce the presence of harmful content by moderating reported posts. It has expanded reporting options to cover topics like self-harm, drug use, false news, hate speech, and even terrorism, and it has renamed the reporting option itself, which is now found under a button titled “find support or provide feedback.”

Call Out Vs. Call In

Instead of leaning on machine learning, Facebook has turned toward its users and relied heavily on self-policing. It’s not just about flagging dangerous or inappropriate content anymore; it’s about policing one another anonymously to ensure a more user-friendly environment online. If users find themselves offended, concerned, or otherwise frightened by content someone has posted on Facebook, they can send a subtle nudge to a team of moderators to check on the post. If the post violates Facebook’s ever-expanding Terms of Service, it’s either removed or flagged for further review. The original poster is also subtly alerted that their post was flagged. Instead of being “called out” in the comments, users are told in private to reconsider what they’ve done.

A New York Times article published earlier in 2019 argues that call-out culture provides no inherent benefit to society and simply incites animosity between peers. Call-out culture, for the unaware, has risen to prominence in the online era; it calls upon people to “call out” their peers over social faux pas. Facebook clearly understands this: users can anonymously report one another for posting content that could be considered offensive. This may diminish animosity between peers, but it still carries severe drawbacks for the moderators who must interact with this content on a constant basis.

The question remains whether a culture of self-policing contributes anything positive on its own. A room full of Facebook moderators is subjected to daily mental trauma and stress, but the alternative means leaving offensive content untouched on a heavily used website. A team of moderators remains necessary as long as users continue to upload highly offensive content.

With Facebook having to constantly update its “remove, reduce, inform” policy, one final question remains: Who, exactly, benefits from social networks like Facebook? And what sort of value do they provide in today’s society?

Julia Sachs is a former Managing Editor at Grit Daily. She covers technology, social media and disinformation. She is based in Utah and before the pandemic she liked to travel.
