
Facebook Live’s Wild West: The Moderation Challenges of Live Video

The attack on two mosques in New Zealand eleven days ago has dominated headlines across the globe and shocked many with its horrific nature. In less than ten minutes, a gunman armed with five guns and a camera strapped to his chest fired hundreds of rounds, killing fifty people in Christchurch. The attack has been described as the deadliest mass shooting in New Zealand’s history.

In the wake of the shooting, it emerged that the gunman had broadcast himself on Facebook Live mercilessly shooting Muslim worshippers gathered for prayer. His efforts to spread his white supremacist ideology were not limited to Facebook: before the attack, he had posted a hateful manifesto across several online platforms.

Although nobody reported the video while it was being broadcast live, it spread across the internet and was re-uploaded countless times to Twitter, YouTube, and other sites before Facebook eventually deleted the original. According to Facebook, more than 1.5 million videos of the attack were removed globally from its platform alone within the first 24 hours.

This attack is the latest in a string of incidents highlighting Facebook’s difficulty with content moderation, particularly around live video tools such as Facebook Live.

What Is Facebook Live, Exactly?

Facebook Live is a tool that allows any Facebook user to broadcast content publicly or to their friends from their phone. Followers get notifications when the user goes live and can follow along, sharing comments and their reactions in real time. Now nearly four years old, it’s widely used as a broadcasting tool by everyone from the White House and celebrities to your best friend showing off their latest concert experience.

Launched in August 2015, Facebook Live initially rolled out to high-profile users and public figures. Its global launch, however, was expedited by Facebook CEO Mark Zuckerberg himself. Once he saw data showing that people watched live streams three times longer than standard video content, Zuckerberg became enamored with the feature. According to the Wall Street Journal, he put more than 100 employees under “lockdown” in February 2016 to focus exclusively on its global rollout.

On the surface, his push worked. By April 2016, all Facebook users had the ability to use the live tool. Zuckerberg announced the global rollout with a Facebook post, saying:

“Anyone with a phone now has the power to broadcast to anyone in the world. When you interact live, you feel connected in a more personal way. This is a big shift in how we communicate, and it’s going to create new opportunities for people to come together.”

Facebook portrayed its new Live tool as a harmless, feel-good way to broadcast the moments in a user’s life that feel significant to them. Commercials showed users sharing moments like dancing or a visit to Niagara Falls.

In hindsight, it is clear that Facebook did not foresee the problems that would come with making such a tool accessible to more than a billion people worldwide.

Controversy and Content Moderation

Despite the fanfare, Facebook Live’s start was anything but picture-perfect. The tool has courted controversy since its inception, largely because any user can broadcast video at any time with little censorship or oversight.

In July 2016, a Minnesota woman, Diamond Reynolds, broadcast live as her boyfriend, Philando Castile, died after being shot by a police officer during a traffic stop. Months later, a man in Thailand broadcast his suicide by hanging on Facebook Live, and the video was not immediately removed. In April 2017, three shootings were broadcast on Facebook Live in a span of two days.

While Facebook refuses to release an official number, Fortune estimated in 2017 that more than 50 violent events had already been broadcast on Facebook Live, and that figure is now two years old.

Beyond the acts of violence mentioned above, at the forefront of the debate over live social media broadcasting tools is their use in acts of terrorism. With the advent of social media, terrorists and extremists around the globe now have new tools at their disposal to recruit others, showcase their acts, and spread extremist ideology.

While such content surely violates every social platform’s community guidelines, many are asking: who is taking it down, and how quickly can they do so?

The difficulty of moderating live video is staggering when one considers the sheer number of live broadcasts starting each minute around the world. According to Fidji Simo, Facebook’s Director of Product Management, there have been over 3.5 billion live broadcasts since Facebook Live’s inception, and that figure was reported last year. Do the math, and that works out to millions of broadcasts per day. Can a content team working around the clock reasonably catch violent live content the moment it appears?
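As a rough back-of-the-envelope check on that claim, the arithmetic works out roughly as sketched below. The rollout and reporting dates are assumptions drawn from the timeline above, not official figures.

```python
from datetime import date

# Back-of-envelope estimate of Facebook Live's average daily volume.
# Assumptions (not official figures): the 3.5 billion cumulative
# broadcasts were reported in mid-2018, and the tool reached all
# users in April 2016.
TOTAL_BROADCASTS = 3.5e9
GLOBAL_ROLLOUT = date(2016, 4, 1)    # approximate global launch
FIGURE_REPORTED = date(2018, 6, 1)   # assumed date the 3.5 billion figure was cited

days = (FIGURE_REPORTED - GLOBAL_ROLLOUT).days
print(f"~{TOTAL_BROADCASTS / days / 1e6:.1f} million broadcasts per day on average")
# Prints roughly "~4.4 million broadcasts per day on average"
```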

This is a problem Facebook has faced consistently as it has grown into a worldwide community, and it goes beyond the Facebook Live tool. According to the Wall Street Journal, Facebook employs some 15,000 content reviewers, and that number continues to grow. How can 15,000 reviewers monitor the millions of videos and images posted every day by billions of people?

Another aspect of Facebook’s content moderation that has been called into question is its use of AI software. On Wednesday evening, Facebook acknowledged that the AI systems it uses for moderation had failed to catch the New Zealand gunman’s live broadcast. While the company says it is actively working to improve this technology, many think it’s too little, too late.

New Zealand’s Impact

According to Facebook, the New Zealand gunman’s live broadcast drew fewer than 200 views while it was live, and about 4,000 views in total before the video was removed. What is difficult for many to comprehend is that nobody reported the broadcast while it was live, and that Facebook only removed the video after being alerted by New Zealand police.

By then, the damage had been done. In the past week, Google, YouTube, Twitter, and other sites have worked tirelessly to remove millions of original and edited copies of the footage. Facebook stated that it had created a digital fingerprint of the initial livestream, which helped automate removals and ensured that a majority of the copies were blocked before they could be publicly posted.
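Facebook has not detailed how its fingerprinting works, but the general idea behind this kind of matching can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration using an exact cryptographic hash; real systems rely on perceptual video hashes that still match after re-encoding, cropping, or minor edits, which a byte-exact hash cannot do. The function names here are illustrative, not Facebook’s.

```python
import hashlib

# Simplified, hypothetical sketch of "digital fingerprint" matching.
# An exact SHA-256 hash only catches byte-identical re-uploads; production
# systems use perceptual hashes that tolerate re-encoding and edits.

def fingerprint(video_bytes: bytes) -> str:
    """Return a fingerprint (here, a SHA-256 digest) of a video file."""
    return hashlib.sha256(video_bytes).hexdigest()

# Fingerprints of known violating footage, added when the original is removed.
blocklist: set[str] = set()

def register_violation(original_video: bytes) -> None:
    """Record the removed original so future copies can be matched."""
    blocklist.add(fingerprint(original_video))

def should_block(upload: bytes) -> bool:
    """Check a new upload against known fingerprints before it goes public."""
    return fingerprint(upload) in blocklist
```

The limitation is visible even in this toy version: change a single byte of the file and the hash no longer matches, which is why edited and re-encoded copies of footage like this require perceptual matching and manual review to catch.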

This development only further highlights the problem of extremists using live broadcasting to connect with like-minded people online. This incident is not the first time such tools have been used to broadcast acts of terrorism: in 2016, a French terrorist killed a police captain and his partner, then went on Facebook Live to speak about the crime.

New Zealand’s Prime Minister Jacinda Ardern has called for these companies to be held accountable, joining other world leaders who, in the wake of the shooting, have demanded more content regulation:

“We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

In the United States, the Chairman of the House Homeland Security Committee, Rep. Bennie G. Thompson, has called on Facebook, YouTube, and other companies to brief Congress on their response to the shooting. “I must emphasize how critically important it is for you to prioritize the removal of this sensitive, violent content,” Thompson wrote.

Facebook Live’s Future

New Zealand’s shooting isn’t the first act of violence to be broadcast on social media, and it probably won’t be the last.

It’s unclear what long-term steps Facebook will take beyond its current work to re-evaluate content moderation. Some have called for Facebook Live content to be time-delayed, while others have called for the tool to be removed altogether.

There’s no doubt that Facebook Live brings people together and can be a positive tool for marketing, broadcasting live events, and sharing important life moments. But when a tool meant for public use is exploited by extremists and others looking to broadcast acts of violence, it harms the online community and traumatizes viewers.

In the “Black Mirror” episode “White Bear”, a woman is followed around a park by people trying to kill her while others film the situation on their phones, ignoring her cries for help. The episode was well received by critics, with one writing: “It aims to ask: To what extent can you stand by and watch horror before you are complicit [and] punishable?”

This past week it seems that life has imitated art, and that the reality is worse than we could ever imagine.
