Facebook announced this week that it would ban deepfakes, or altered videos, from its platform as the social network gears up for the 2020 election season. Deepfakes, videos altered by artificial intelligence or other technologies to manipulate audiences, have caused controversy in recent months after doctored footage of figures like Nancy Pelosi went viral on Facebook.

In the video of Pelosi, the Speaker of the House appears to speak with slurred speech, as if she were drunk or had suffered a stroke. The video went viral but was soon determined to have been altered with simple editing software, though not so crudely that viewers could easily tell it was not real. Even after the video was publicly debunked, the damage to Pelosi’s reputation had already been done.

Facebook’s ban, while potentially effective at setting a written standard for such content, has drawn criticism since it was announced over a loophole that would still let deepfakes intended as parody spread on the social network. “Cheapfakes” and parody videos would still be allowed, leaving it to Facebook’s moderators to determine which content should and should not circulate on the platform.

Deepfakes Represent Major Value In Advertising

Such videos are useful for advertisers, who can use the technology to alter models and advertisements, creating ads in different languages with the click of a button. In politics, however, they present a dangerous reality in which disinformation can spread at rapid-fire speed.

For Speaker Nancy Pelosi, the spread of a fake video portraying her as unable to do her job could be enough to discredit her work entirely. In other cases, a deepfake could portray President Donald Trump declaring war on Iran, or show a presidential candidate in the 2020 election saying something out of character in order to sway voters.

Deepfakes, which are part of a broader problem that Facebook calls “manipulated media,” have been banned by the company in their entirety, unless they are intended as parody or satire, or are clearly labeled as artificial or altered media. Sounds confusing, right?

Facebook has a complicated ethical past.

In other words, the power to stop the spread of deepfakes now rests in the hands of Facebook’s team of moderators, who are responsible for drawing the line between parody and mass manipulation. In some cases, videos may not be removed but instead restricted after being fact-checked: Facebook would leave the false video up, but label it as fact-checked and false, and limit how widely it can be spread.

Regardless, Facebook is leaving it up to human judgment to rein in the power of artificial intelligence and technology, something the company has already proven is easier said than done. The company has long been criticized for its inability to control the spread of harmful information on its platform, having acknowledged its role in the Rohingya genocide as well as the ways its platform has been used to fuel white nationalism and other hate crimes.

The 2019 shooting at mosques in Christchurch, New Zealand was live-streamed on the social network. Since it is the company’s policy to moderate content as it is flagged, the video remained on the social network for some time before it was removed, but not before it had been downloaded and re-published thousands of times, across dozens of websites around the world. For a potentially harmful deepfake to be contained, Facebook moderators would have to flag each and every copy of the same video as fact-checked, and even then the policy reaches only videos spread directly on Facebook.

Deepfake Ban Is Facebook’s Solution To A Larger, More Glaring Problem

Given how quickly content can go viral in 2020, the time it takes Facebook’s moderators to determine whether a video is real and whether it is harmful could be more than enough for it to spread to millions of people.

Experts have criticized Facebook’s ban on deepfakes as an ineffective fix that sidesteps the real problem of disinformation on the controversial social network. Because the policy technically covers only footage deliberately doctored with artificial intelligence or Photoshop to mislead viewers, content that has merely been spliced or edited to be misleading can remain on the site without being fact-checked. This is Facebook’s way of saying that viewers should, in theory, be able to tell for themselves when content has been edited to manipulate audiences.

At its core, the ban on deepfakes is a temporary measure to appease critics who want to combat the larger issue of platform manipulation on social media. On the surface, Facebook has acknowledged the issue and how such content is created, but the policy has yet to explain how, exactly, Facebook plans to prevent the spread of manipulative information in the future.