YouTube is upping its defense against hate. The company has long been removing videos that target people for their race and sexual orientation, but YouTube wants to take it a step further.
Back in June, Vox Media journalist Carlos Maza drew attention to harassment he was receiving from YouTube commentator Steven Crowder. Crowder repeatedly used racist and homophobic slurs aimed at Maza in his videos. At first, YouTube didn’t believe Crowder’s videos violated the company’s guidelines. However, YouTube eventually suspended his ability to earn ad money but still did not remove the videos. After the harassment controversy, YouTube promised to review its policies.
Six months later, the company has come back with a few new policies aimed at preventing similar situations, which it outlined in a blog post.
“We remain committed to our openness as a platform and to ensuring that spirited debate and a vigorous exchange of ideas continue to thrive here,” said Matt Halprin, YouTube’s global head of trust and safety. “However, we will not tolerate harassment and we believe the steps outlined below will contribute to our mission by making YouTube a better place for anyone to share their story or opinion.”
The Policy Changes
For starters, YouTube is expanding its anti-threat policy. It will now prohibit “veiled or implied threats,” as opposed to just direct threats. There is also a newly structured hate speech policy that forbids creators from insulting others “based on protected attributes such as their race, gender expression, or sexual orientation.” Halprin added that this policy now covers public figures and public officials as well.
YouTube is also cracking down on repeat offenders. The company will not only remove repeat offenders’ content, but it will also suspend them from the YouTube Partner Program (YPP), which is what allows creators to make money from their videos. Creators who are removed from the program can no longer profit from their content.
Halprin ends the blog post by saying all of these new policies also apply to the comment section, and the company will remove any comments that violate them. YouTube will also give creators more control over comments by automatically flagging potentially inappropriate ones for review; creators then decide whether those flagged comments should appear.
“All of these updates represent another step towards making sure we protect the YouTube community,” Halprin said. “We expect there will continue to be healthy debates over some of the decisions and we have an appeals process in place if creators believe we’ve made the wrong call on a video.”
The Public’s Reaction
Halprin was right to anticipate debate over these changes. #YouTubeisoverparty is currently trending on Twitter because certain videos have already been removed, and people are not happy about it.
@PrisonPlanet tweeted: “It’s particularly egregious that YouTube is retroactively applying their new terms, so videos that were within the rules 6 months ago are now being deleted. Many people make a living off of YouTube and YouTube is screwing with their lives. #youtubeisoverparty”
The overall consensus seems to be that YouTube is trying to control what content creators can say, with many accusing the company of escalating censorship.
@MikaylaEneria tweeted: “I’m sorry but this is the last straw. Does anyone work there [YouTube] that actually cares about the content creators and viewers??? Or is it all about advertisers and profit. These new rule changes are DISGUSTING & completely ruin everything YouTube used to be. #youtubeisoverparty”
These new policies have only just been implemented, so we’ll have to wait and see what the long-term effects are. However, if people continue to complain, then YouTube might need to take another look at its policy changes.