Big Brother Is Watching You With Facial Recognition AI

Photo: Big Brother surveillance by facial recognition AI (Pixino / DreamsTime)

In 1949, George Orwell published his dystopian novel Nineteen Eighty-Four, which described a future society in which the government continuously monitored everyone’s actions and conversations. AI technology has now made that level of monitoring possible, and society needs to cope with the consequences.

Facial Recognition in China

Facial recognition is perhaps the AI technology with the most potential for abuse. The Chinese government is rolling out its Xueliang (“Sharp Eyes”) Project, which connects security cameras on roads, buildings, and malls to track the country’s 1.4 billion inhabitants. The stated goals are to stop criminal behavior and to monitor dissidents.

On the somewhat amusing side, there are reports that Chinese authorities have used the technology to catch and shame jaywalkers and toilet paper thieves. On the scary side, there is evidence that the Chinese government is using it to monitor and intensify the persecution of minorities (e.g., Tibetans, Uighurs) and religious groups (e.g., Falun Gong practitioners). Time magazine reported that the authorities harass people if the cameras catch them growing a beard, leaving by their back door, or wearing a veil.

Facial Recognition in the US

While few people in the US would be comfortable with a surveillance apparatus as broad as China’s, surveillance in the US is widespread and expanding, with both proponents and detractors.

By 2021, the top twenty US airports will be using facial recognition to screen all incoming international passengers. Here again, detecting terrorists would be a benefit, if it works. However, misidentifying law-abiding travelers could result in inconvenience or, worse, the wrongful arrest of innocent passengers. According to Department of Homeland Security (DHS) data, DHS facial recognition systems erroneously reject as many as one in twenty-five travelers using valid credentials. At that rate, an error-prone DHS face scanning system could cause 1,632 passengers per day to be wrongfully delayed or denied boarding at New York’s John F. Kennedy (JFK) International Airport alone. Imagine having your vacation ruined because a facial recognition system incorrectly matches your face to a terrorist as you are checking in at the airport.
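
The arithmetic behind that 1,632 figure is simple to reproduce. Here is a minimal sketch in Python; the daily international passenger volume at JFK is an assumed value chosen to be consistent with the cited figure, not an official statistic:

```python
# Back-of-the-envelope sketch of how a small per-passenger error rate
# scales into daily disruptions. The JFK volume below is an assumption
# chosen to be consistent with the 1,632-passenger figure cited above,
# not an official statistic.

FALSE_REJECT_RATE = 1 / 25            # DHS: up to 1 in 25 valid travelers rejected
INTL_PASSENGERS_PER_DAY = 40_800      # assumed daily international volume at JFK

expected_false_rejects = FALSE_REJECT_RATE * INTL_PASSENGERS_PER_DAY
print(f"Expected wrongful delays per day: {expected_false_rejects:,.0f}")
# -> Expected wrongful delays per day: 1,632
```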

Police departments in the state of Maryland have used facial recognition software to identify protest participants who had outstanding warrants. Maryland police have also used the technology to catch robbery and shooting suspects captured on camera during the commission of their crimes. The FBI has started using facial recognition technology to search driver’s license photos stored in the Department of Motor Vehicles databases of several states. In 2019, the US Government Accountability Office estimated that law enforcement networks include over 640 million photos of adults’ faces, nearly two images for every adult in the United States that year.

Civil Rights Issues for Facial Recognition Technology

These uses of facial recognition technology have drawn the ire of civil rights advocates and several members of the US Congress, who object that individuals never gave permission for their license photos to be used in this manner. In a rare show of bipartisanship, both Democratic and Republican members of the US House Oversight and Reform Committee have condemned the way law enforcement uses the technology. Still, as of this writing, no law has been passed to prevent it.

Mass shootings, especially in schools, have been on the rise in the US. At the Parkland, Florida high school that was the site of a 2018 massacre that killed fourteen students and three staff members, officials have installed 145 cameras with AI monitoring software. Even at Parkland, some students and parents have questioned the invasion of privacy as well as the software’s potential for mistaken identity.

Discrimination by Facial Recognition Software

Another problem with facial recognition software is that most of the databases used to train it are composed primarily of white males. As a result, training produces systems that perform well on white males but not as well on women and people of color. NIST published a 2019 report evaluating 189 face recognition systems from 99 developers and found that many systems were ten to one hundred times more likely to falsely match a Black or Asian face to a criminal than a white one. Such systems would likely lead authorities to falsely identify women and people of color as matches to suspects’ faces far more often than white men. For example, if the systems used for terrorist screening contain these biases, minorities may be more likely to be falsely detained or arrested.
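
To see what a ten- to one-hundred-fold disparity means in practice, here is an illustrative sketch; the baseline false match rate and daily screening volume are assumptions chosen for illustration, not figures from the NIST report:

```python
# Illustrative only: shows what a 10x-100x disparity in false match
# rates (as NIST reported) means in absolute numbers. The baseline
# rate and screening volume below are assumptions, not NIST figures.

BASELINE_FMR = 1e-4        # assumed false match rate for the best-served group
DISPARITIES = (10, 100)    # NIST: some systems were 10-100x worse for Black/Asian faces
SCREENED_PER_DAY = 50_000  # assumed daily faces checked against a watchlist

for factor in DISPARITIES:
    fmr = BASELINE_FMR * factor
    print(f"{factor:>3}x disparity: ~{fmr * SCREENED_PER_DAY:,.0f} false matches/day "
          f"vs ~{BASELINE_FMR * SCREENED_PER_DAY:,.0f} for the baseline group")
```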

Unreliability of Facial Recognition Software

Facial recognition systems can also be fooled by noise in images. People can look at two images and determine whether they show the same person, even if the images have minor distortions such as graininess or extraneous pen marks. Facial recognition systems, however, are easily fooled by these same distortions. Show a facial recognition system images of two different people that carry similar distortions, and it will likely decide, incorrectly, that they show the same person simply because the images share the same distortions.
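
A toy example makes this failure mode concrete. The sketch below compares images as raw mean-centered pixel vectors, which is a simplification (real systems compare learned embeddings), but it shows how a shared distortion can swamp the differences between two faces:

```python
import numpy as np

# Toy model only (assumption: images compared as raw mean-centered pixel
# vectors via cosine similarity; real systems compare learned embeddings,
# but shared artifacts can dominate the comparison in an analogous way).

rng = np.random.default_rng(0)

face_a = rng.standard_normal(10_000)    # stand-in for person A's image
face_b = rng.standard_normal(10_000)    # stand-in for person B's image
distortion = rng.normal(0, 5, 10_000)   # identical grain/pen marks on both

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(f"Clean images:            similarity = {cosine(face_a, face_b):.2f}")
print(f"Same distortion applied: similarity = "
      f"{cosine(face_a + distortion, face_b + distortion):.2f}")
# The shared distortion dominates both vectors, so two different faces
# suddenly look nearly identical to the comparator (~0.96 here vs ~0.00).
```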

Facial Recognition Social Advocacy

Amazon sells commercial facial recognition software named Rekognition to companies and law enforcement agencies, and many other companies, including Google, Microsoft, and IBM, offer similar software. In 2018, a group of forty organizations led by the American Civil Liberties Union (ACLU) sent a letter to Amazon CEO Jeff Bezos demanding that Amazon stop selling its facial recognition technology to government organizations. The primary concern is that people should be free to walk down the street without 1984-style government surveillance. A non-profit organization, Ban Facial Recognition, offers an online interactive map of where police departments and airports in the US use facial recognition technology, along with where municipalities have enacted laws against its use. Beyond the possibility of incorrect identification, the Ban Facial Recognition website argues that the technology violates the US Fourth Amendment protection against searches without a warrant and that it supercharges discrimination.

To drive this point home with lawmakers, in 2018 the ACLU submitted images of members of the US Congress to Amazon’s Rekognition software. The software falsely identified twenty-eight members as matches to faces of criminal suspects. Worse, forty percent of the misidentified members were people of color, even though roughly eighty percent of Congressional members are white. The ACLU issued a position paper on surveillance technology expressing concern that the technology has the potential to “worsen existing disparities in treatment suffered by people of color and the poor by embedding, amplifying, and hiding biases.”
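
The disparity in those numbers is easy to quantify. Here is a quick sketch, assuming the standard 535 voting members of Congress and taking the other figures from the ACLU test as reported:

```python
# Quick check of the ACLU figures cited above. The 535 figure is the
# number of voting members of Congress; the 20% share of people of
# color follows from the article's statement that ~80% are white.

MEMBERS = 535
FALSE_MATCHES = 28
POC_SHARE_OF_ERRORS = 0.40
POC_SHARE_OF_CONGRESS = 0.20

print(f"Overall false match rate: {FALSE_MATCHES / MEMBERS:.1%}")
print(f"People of color overrepresented among errors by: "
      f"{POC_SHARE_OF_ERRORS / POC_SHARE_OF_CONGRESS:.1f}x")
# -> Overall false match rate: 5.2%
# -> People of color overrepresented among errors by: 2.0x
```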

Facial Recognition Legislation

As of this writing, bills are pending in the US Congress and in many state and local legislatures to ban the use of facial recognition technology in law enforcement. In 2019, the State of California enacted a three-year ban on the use of facial recognition in police body cameras. A bill pending in the US Congress would ban the use of facial recognition in public housing, and the European Union is considering a five-year ban on facial recognition software. In 2020, Amazon, Microsoft, and IBM all decided to at least temporarily suspend sales of facial recognition software to law enforcement agencies.

Conclusions

The bottom line is that mistakes by facial recognition systems can be inconvenient or worse. No one wants to be jailed or detained at an airport because a facial recognition system incorrectly matched their image to that of a terrorist or criminal. As a society, we must weigh the benefits of catching terrorists and criminals against the individual consequences of facial recognition errors. When data issues cause those errors to fall more frequently on minorities, the problem is worse: it becomes discrimination, which can only be fixed either by banning the use of facial recognition systems in law enforcement or by ensuring that training data is free from bias.

Surveillance systems using facial recognition can also become an invasion of privacy. Across the board, we need to find the right balance between catching terrorists and criminals and respecting the privacy of citizens.


This article first appeared in Grit Daily on 11/13/2020.
