Sundance Documentary ‘Coded Bias’ Demonstrates The Dangers Of Racial Bias In Facial Recognition Software

Published on January 30, 2020

In 2017, reports began to surface that Apple’s facial recognition software had trouble recognizing Asian faces. The software, which lets users unlock their smartphones with a glance at the front-facing camera, was one of the first to bring facial recognition into everyday use. Though groundbreaking, it still had some kinks to work through: namely, the software had not been trained to recognize the faces of people of color.

Apple’s problem amounted to a minor inconvenience, but the ugly truth is that the issue is far more widespread than it appears at first glance. It is one thing when your iPhone can’t recognize your face; it is another when government-issued facial recognition software has trouble differentiating between a known criminal and someone else.

Coded Bias Demonstrates A Need For Legislation To Protect People From Facial Recognition

A new documentary by director Shalini Kantayya, which premiered this week at the Sundance Film Festival, aims to dig deeper into how racial bias in facial recognition technology is a much bigger problem than it may seem. “Coded Bias” follows the stories of several experts in facial recognition technology, from those who work with the technology every day to those who fight for the rights of the civilians exposed to its failures.

The documentary follows Joy Buolamwini, a researcher at MIT who discovered by accident that facial recognition software has trouble recognizing black faces. Through her research, Buolamwini soon uncovers major discrepancies in the types of faces that these systems can and cannot recognize.

More specifically, she finds that they have no trouble recognizing white, male faces but a much harder time recognizing the faces of women and people of color. Buolamwini’s own face, for example, goes unrecognized by the technology until she puts on a blank white mask.

In Buolamwini’s projects, which sit at the intersection of art and technology, the problem is an inconvenience that only adds time and work. But what happens when these technologies are used by governments? Or in public spaces like music festivals and airports? If the technology cannot recognize a black face sitting directly in front of the camera, how can it differentiate between a known threat in an airport and an innocent civilian?

In the UK, where police already use this technology in public spaces, racial bias in the software has repeatedly led it to mistakenly target people of color by matching their faces with those of known criminals.

In one scene, police interrogate a teenage boy on the street after facial recognition software mistakes him for an adult with a criminal record. But the heart of the problem is this: even if the software had worked correctly, would the man with the criminal record not be entitled to basic freedoms like going out in public?

A Global Problem

In China, the social credit score system has proven effective at keeping citizens from exhibiting bad behavior in public spaces. Facial recognition, which citizens encounter when riding public transportation or checking out at a grocery store, tracks their every move to make sure they are behaving well.

Good behavior is rewarded with good social credit, while bad behavior is punished by deducting points from your score. This comfort with the technology has created a society that relies on these scores to do things like make friends, decide whom to trust, or even get jobs. A bad score does more than impact your ability to travel; it can make or break your lifestyle and your social circle.

In The United States

In the United States, citizens are promised more freedom than that, or so they think. Facial recognition software and algorithms have been proven to carry racial bias. But that isn’t the fault of the technology so much as of the people who build it and the data they feed it. If the people programming these algorithms are mostly male and mostly white, the resulting software is going to have a hard time recognizing the faces of women and people of color.

These are the same technologies already used by companies to hire new employees and sift through applications, and by governments to keep track of where their people are. In Coded Bias, stories of employees whose jobs were put at risk by these algorithms raise the question of whether the technology was right to target them in the first place.

For this reason, Coded Bias serves as a pivotal educational tool for helping citizens understand how their rights are being infringed upon by the same technologies they use every day. Through a diverse set of interviews and examples of how facial recognition software and sorting algorithms endanger society, the documentary makes a strong case for why legal intervention is essential to creating an equal environment for all.

Coded Bias is currently seeking distribution rights.

Julia Sachs is a former Managing Editor at Grit Daily. She covers technology, social media and disinformation. She is based in Utah and before the pandemic she liked to travel.
