Governments Use AI Facial Recognition Tech: Is It Legal?

Published on February 19, 2020

It seems that with each headline we read and each technological advancement we make, we inch closer to the dystopian future George Orwell warned us about. Artificial intelligence in general is a controversial topic, even among those developing the technology, with opinions varying widely on whether it is necessary at all. One of the more frightening subsets of AI is facial recognition technology, and more specifically, the government's use of that tech to police its citizens.

CCTV cameras that identify and track the population sound like technology ripped right out of the pages of 1984, but they are very real. Big Brother is already watching you, and governments are working on keeping an even closer eye on you than before.

How Does Facial Recognition Technology Work?

Facial recognition technology is made up of two key components. Firstly, it requires cameras which can scan individual faces and break those faces down to a set of facial landmarks, or nodal points, in real time. It then requires a database of images for the system to compare those nodal points to, alerting the system operator when a match is found. 
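The matching step described above can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's actual pipeline: real systems extract dozens of nodal points with specialized models, while here the "faces" are tiny made-up feature vectors and the threshold is arbitrary.

```python
import math

def euclidean_distance(a, b):
    """Distance between two facial-landmark feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_match(probe, database, threshold=0.6):
    """Return the label of the closest enrolled face, or None.

    `probe` is the landmark vector extracted from a live camera frame;
    `database` maps identity labels to previously enrolled vectors.
    A match is reported only if the closest face is within `threshold`.
    """
    best_label, best_dist = None, float("inf")
    for label, enrolled in database.items():
        d = euclidean_distance(probe, enrolled)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

# Toy database of two enrolled "faces" (illustrative vectors only)
db = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
print(find_match([0.12, 0.21, 0.29], db))  # close to alice's vector
```

The threshold is where the false-positive debate lives: set it too loosely and strangers get flagged as matches; set it too tightly and genuine matches are missed.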

Both elements of a facial recognition surveillance system are controversial in their own right. One major concern is the accuracy of the cameras and the risk of false positives. Another is the means by which these databases are compiled. Taken as a whole, the ethics of the entire system is the center of a heated debate.

Clearview AI Put Together a Database Using Controversial Social Media Scraping Technique

Clearview AI is a name that has broken into headlines recently and is shrouded in controversy. The group has partnerships with 600 law enforcement agencies across the United States and Canada but was not publicly known until a New York Times exposé in January. The secrecy surrounding the organization is just the tip of the iceberg.

Clearview AI's domain in facial recognition technology is the database of images and nodal points that it has compiled. Its revenue is generated by offering law enforcement agencies paid access to that database. What makes Clearview AI different is that it compiled this information from social media using a controversial technique known as scraping.

Scraping is the practice of mining data from publicly available sources on the internet, such as social media platforms. The scraped data is then compiled for whatever purpose the scraping party has in mind. The practice is controversial because users' data ends up being used in ways they never intended. Microsoft's LinkedIn tried to prevent a data-scraping firm from harvesting its platform but lost in a federal court of appeals. As of right now, there is no legal precedent preventing the scraping of public websites.
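At its core, scraping is just fetching public pages and pulling out the data of interest. The sketch below uses Python's standard-library HTML parser to collect image URLs from a page; it is a bare-bones illustration, not Clearview AI's actual tooling. A real scraper would fetch pages over HTTP (e.g. with `urllib.request`) and crawl from profile to profile, whereas here we parse a small hard-coded snippet.

```python
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collects the `src` attribute of every <img> tag on a page."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# A static stand-in for a fetched profile page
page = '<html><body><img src="/photos/a.jpg"><img src="/photos/b.jpg"></body></html>'
scraper = ImageScraper()
scraper.feed(page)
print(scraper.image_urls)  # ['/photos/a.jpg', '/photos/b.jpg']
```

The simplicity is the point: because profile photos sit in ordinary public HTML, nothing technical stops a scraper from harvesting them at scale, which is why the fight has played out in cease-and-desist letters and courtrooms rather than in code.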

There has already been pushback against Clearview AI from the big three of social media. Twitter, Facebook, and YouTube have all sent cease-and-desist letters demanding that the company stop using their platforms to fill its database. Clearview AI claims that scraping public platforms is its First Amendment right, in what might be one of the biggest stretches of the First Amendment to date.

Facial Recognition Technology is Currently Being Used, Not Always for Good

Clearview AI has claimed that its technology already led to the arrest of a terrorism suspect in New York City. The NYPD has denied this claim, stating that while it did indeed use facial recognition technology, it did so using legally obtained images.

One in four police departments in the US has access to the technology, most of which use Amazon's Rekognition. Rekognition's primary function is generating the nodal points and analyzing footage against a database. Most police departments in the US use jail booking photos as the foundation for their databases.

This is also the case in England. London police are deploying CCTV cameras throughout the city whose primary function is facial recognition. Scotland Yard's database will be made up of individuals wanted for violent crimes or other offenses deemed "serious." A match would trigger a notification to police officers, who would then stop and investigate the suspect, essentially creating a system of digitally enabled stop-and-frisk.

The use of facial recognition in China is a major cautionary tale for the technology’s dark side. The Chinese government has employed facial recognition to track the movements of the oppressed Uighur population. This passive, constant surveillance is what critics of governmental employment of facial recognition technology warn against.

We are in a Murky Transition Period with Facial Recognition Technology

What we have seen so far with this controversial AI technology is that it slips very easily into legal gray areas that are ethically questionable. Industry leaders like Microsoft, IBM, and Amazon all favor regulations being put in place to control the development of the technology.

The EU considered banning the technology for a period of five years while it was further developed and ethical regulations were put in place, but has since backtracked from that stance.

The Trump administration favors a hands-off approach to regulation that would avoid "needlessly hamper[ing] AI innovation and growth." Said regulations would initially apply only to the private sector, which would address issues with companies like Clearview AI but would not prevent oppressive governmental uses of the technology.

It is disconcerting that the White House uses China as a benchmark in the AI race. We hope that the spirit of competition does not cloud judgment when it comes to putting ethical regulations in place for this new and powerful tool.

Justin Shamlou is a Senior Staff Writer at Grit Daily. Based in Miami, he covers international news, consumer brands, tech, art/entertainment, and events. Justin started his career covering the electronic music industry, working as the Miami correspondent for Magnetic Mag and US Editor for Data Transmission.
