Senator Reverend Warnock, Colleagues Applaud Enforcement Against Rite Aid’s Discriminatory Use of Facial Recognition Technology

Senator Reverend Warnock, lawmakers: “The Rite Aid complaint and settlement illustrate the unique threats that facial recognition and other biometric identification systems pose for Black communities, communities of color, and low-income individuals”

ICYMI: Politico – Washington takes aim at facial recognition

Washington, D.C. – Today, U.S. Senator Reverend Raphael Warnock (D-GA), a member of the Senate Commerce Committee, joined Senator Edward J. Markey (D-MA) and 10 of their Senate colleagues in applauding the Federal Trade Commission's (FTC) recent enforcement action against Rite Aid for its discriminatory and invasive use of facial recognition technology. The Senators also urged the FTC to use every tool available to continue robust enforcement to protect consumers and prevent discriminatory surveillance, which poses dangerous threats to people's privacy and civil liberties.

“The Rite Aid complaint and settlement illustrate the unique threats that facial recognition and other biometric identification systems pose for Black communities, communities of color, and low-income individuals. As locations in every sector — from pharmacies to amusement parks to sports stadiums — increasingly employ facial recognition systems, the FTC has a responsibility to use every available tool to protect consumers and prevent discriminatory surveillance,” the Senators wrote in their letter to Chair Lina Khan. “As facial recognition technology proliferates across industries, we encourage the FTC to continue its recent robust enforcement using the full range of its regulatory authority.”

In its settlement with Rite Aid, the Commission found that the company's use of facial recognition systems in its pharmacies led to thousands of false identifications, resulting in individuals being wrongfully searched, accused of shoplifting, and even expelled from Rite Aid stores. Rite Aid did not disclose its use of facial recognition technology and even discouraged employees from informing customers of its use. The Commission also found that the company disproportionately deployed facial recognition tools in neighborhoods with a plurality of people of color.

This letter follows Senator Warnock’s recent effort with Majority Whip Durbin and 16 other Senators raising concerns with the Department of Justice (DOJ) that funding facial recognition software, which can be inaccurate and unreliable, may lead to violations of Title VI of the Civil Rights Act, which prohibits “discrimination under any program or activity receiving Federal financial assistance.” Studies have shown that facial recognition tools misidentify the faces of people of color and women at higher rates than those of white men, and false matches can lead to real harm, including wrongful searches, removals from businesses, and even arrests.

In addition to Senators Warnock and Markey, the letter was signed by U.S. Senators Peter Welch (D-VT), Bernie Sanders (I-VT), Elizabeth Warren (D-MA), Ron Wyden (D-OR), Alex Padilla (D-CA), Jeff Merkley (D-OR), Tina Smith (D-MN), Mazie Hirono (D-HI), Ben Cardin (D-MD), and Laphonza Butler (D-CA).

The letter can be found HERE, and the full text is below:

Dear Chair Khan:

We write to commend the Federal Trade Commission (FTC) on its recent action against Rite Aid for its discriminatory and invasive use of facial recognition technology in its pharmacy stores. The Rite Aid complaint and settlement illustrate the unique threats that facial recognition and other biometric identification systems pose for Black communities, communities of color, and low-income individuals. As locations in every sector — from pharmacies to amusement parks to sports stadiums — increasingly employ facial recognition systems, the FTC has a responsibility to use every available tool to protect consumers and prevent discriminatory surveillance. As facial recognition technology proliferates across industries, we encourage the FTC to continue its recent robust enforcement using the full range of its regulatory authority.

The continued proliferation and unchecked use of facial recognition technology poses serious risks to individual privacy and civil liberties. Biometric information — such as one’s fingerprints, facial vector, or iris scan — is sensitive data, and its misuse can create long-term harm. Studies show facial recognition algorithms misidentify the faces of people of color and women at higher rates than those of white men. Although vendors have reported that their facial recognition systems have become more accurate, recent tests from the National Institute of Standards and Technology show that accuracy remains lower on images that are low quality, blurry, obscured, or taken from the side or in poor light — images that are often used as reference images for facial recognition systems at public stores, venues, and other establishments.

Beyond inaccuracy, facial recognition tools also deeply invade individual privacy, with especially serious impacts on communities of color and low-income individuals. These systems are more likely to be deployed in Black, Brown, immigrant, and low-income communities, contributing to increased surveillance, over-policing, and interference with individual civil rights. For example, individuals who believe they are being surveilled are less likely to engage in activities protected by the First Amendment. Law enforcement officers have also falsely arrested at least six individuals due to inaccurate facial recognition matches, all of whom were Black. In 2021, a Black teenager was barred from entering a skating rink due to a false identification match. And a recent report uncovered a 2022 incident in which police used false facial recognition results to arrest an individual, who was then assaulted while detained.

Although law enforcement’s use of facial recognition systems is deeply concerning, facial recognition tools have continued to proliferate in other aspects of our society — often to the surprise of the public. Grocery stores, retail chains, stadiums, airports, amusement parks, housing developments, and even schools have all begun to use these surveillance tools, often without notice to or the consent of those impacted by the technology. The lack of federal standards around the use of facial recognition technology, especially in places of public accommodation, means people are increasingly unable to move, assemble, or appear in public spaces without being tracked and identified. Additionally, the use of facial recognition technology can contribute to discriminatory denials of service in places of public accommodation, preventing people from receiving vital services such as prescriptions.

The FTC has rightfully been aggressive in addressing companies’ discriminatory or invasive use of facial recognition systems, particularly with the Rite Aid settlement. For more than a decade, the Commission has been providing guidance on best practices related to uses of facial recognition technologies. Additionally, over the past five years, the Commission has taken strong steps to crack down on misleading and invasive use of facial recognition systems. In 2019, for example, the FTC fined Facebook $5 billion for a variety of privacy violations and set requirements that the company provide “clear and conspicuous notice” of its use of facial recognition. In 2021, the FTC required Everalbum — a photo app that misled its customers about its use of facial recognition — to obtain express consent before using biometric identifier tools. And last year, the FTC released a policy statement advising that the Commission “is committed to combatting unfair or deceptive acts and practices related to the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies.”

Most importantly, in the FTC’s recent enforcement action against Rite Aid, the Commission found that Rite Aid’s use of facial recognition systems in its pharmacies led to thousands of false identifications, resulting in individuals being wrongfully searched, accused of shoplifting, and even expelled from stores. Furthermore, Rite Aid did not disclose its use of this technology and even discouraged employees from informing customers of its use. Finally, Rite Aid disproportionately deployed facial recognition tools in neighborhoods with a plurality of people of color. This invasive and discriminatory use of facial recognition technology is unacceptable. The Commission’s action against Rite Aid is an important signal that it is closely watching the use of facial recognition systems in public spaces.

Although FTC enforcement does not replace congressional action to safeguard the public’s privacy and civil liberties, we commend the Commission for its work to address facial recognition technologies’ threat to communities of color. Companies should not be surprised by these enforcement actions, as the risks and harms of facial recognition are well-known. The public cases of misidentification and discriminatory deployment demonstrate that the technology has a disproportionate impact on communities of color and low-income individuals. The FTC is right to use its jurisdiction and powers to prevent these threats to our civil rights and liberties. We urge the Commission to continue to take all necessary steps, including robust enforcement and investigatory measures, to combat these harms, and we stand ready to assist you with this work.

Thank you for your attention to this important matter.

###
