
TikTok’s Violent Video Problem Is Nothing New To Social Media

Earlier this week, a graphic video of an apparent suicide began circulating heavily on TikTok. Users who rely on the app's For You Page (which most users do) were fed different versions of the same video, which appears to show a man shooting himself in the head. The suicide was first live streamed on Facebook in August before being taken down, and the footage has since spread across TikTok via the For You Page. But this is not the first time social media has struggled to stop violent content, whether suicide or mass murder, from spreading across its platforms.

Facebook struggled to flag and remove footage of the Christchurch mosque shootings in New Zealand in March 2019. The video, which was originally live streamed before being flagged and removed, was re-shared across Facebook at least 1.5 million times before the spread finally tapered off. Although that degree of violence clearly and explicitly violates Facebook's terms of service, the episode only underscored the now familiar problem of social media moderation.

Because Facebook depends on users to police one another before content is flagged and removed, the video stayed up on the shooter's Facebook page long enough for roughly 4,000 people to view it, and for some to save copies. Once it was taken down, it began recirculating from new accounts that re-posted it to Facebook. With each new share, the platform had no way of removing the video until it was flagged for review by Facebook moderators, a third-party workforce of actual humans responsible for reviewing content that users have flagged for violating the platform's Terms and Conditions.

The whole arrangement seems vastly overcomplicated and emotionally taxing, but the alternative would have social media companies operating as publishers, with each piece of content reviewed before it is released to the public. Companies like Facebook don't want to do this, especially since their entire business model relies on making users feel they can post what they want, exactly when they want. Without that instant gratification, the very nature of social media would be gone.

If each post were given the same treatment as Facebook's advertising platform, meaning it had to be approved before being published, users would quickly lose interest and feel they were being policed by a higher power. And, as we've learned from the great mask debate of 2020, people don't like being told what they can and cannot do, wear, or say. Besides, how do you moderate every post and comment from billions of users around the world and expect moderators to grasp every bit of context and nuance in the thousands of languages spoken on the platform at any given moment?

The answer would require a complete restructuring of how social media inherently works, and would likely de-monetize (at least temporarily) a multi-billion-dollar industry that employs thousands of people around the world. Companies could lean on artificial intelligence to moderate content automatically, but as we learned from Tumblr's ban on nude content in late 2018, that can go awry pretty easily.

Facebook's third-party moderators are known to suffer deteriorating mental health thanks to the egregious content they are hired to sift through. Not only are moderators required to understand the nuances of the platform's Terms and Conditions, they must apply those rules to the content that everyday users flag. Often, a piece of content is harmful but not technically against the company's rules at the time. QAnon is a case in point: the conspiracy theory grew rapidly on social media during the period when its content wasn't yet banned. Now it is one of the biggest disinformation scandals of our time, and it has made its way onto Capitol Hill.

Facebook's global scale poses another problem entirely when political crises arise in developing countries where few moderators have expertise on the subjects at hand. In Myanmar, Facebook played a significant role in the spread of disinformation and propaganda that fueled the genocide of the Rohingya people. While disinformation might not be as shocking or sensational as a live streamed suicide, it still causes real-world damage, and that damage usually falls on ordinary, working-class people.

All of this said, the problem isn't unique to Facebook. Twitter faced similar controversy recently when calls to ban white nationalists from the platform were rebuffed. Nuanced issues like racism are often treated as subjective, making it hard to enforce a blanket ban on a concept that many people don't even believe still exists today. Content depicting suicide or murder is another story, but it runs into the same question: how do you moderate content in real time on a platform designed to give people instant gratification?

The For You Page, the signature algorithm that made TikTok one of the most successful social networking platforms of all time, virtually shattered the follow-heavy culture that makes Twitter and Instagram so successful. Users don't have to follow a page to be shown its content, leveling the playing field by giving every user an equal chance at going viral. Where Twitter and Instagram favor users with massive audiences, TikTok's ability to create a new audience with each post is part of what makes it so popular, and what lets new, up-and-coming creators achieve online celebrity status virtually overnight. But it is also the platform's greatest liability.

TikTok has, for the most part, gotten a handle on the spread of the suicide video that circulated on its platform for several days. The company said on Tuesday that it would ban accounts that repeatedly try to upload the clip. "Our systems have been automatically detecting and flagging these clips for violating our policies against content that displays, praises, glorifies, or promotes suicide," TikTok said in a statement published by CNN.

For TikTok, wading through the murky waters of violent and offensive content is little more than a surefire sign that you, as a company, have made it into the cultural zeitgeist.