
Rite Aid is banned from using AI-powered facial recognition software for five years as part of a settlement with federal regulators over what they called the giant pharmacy’s “reckless use” of the technology.

The Federal Trade Commission (FTC) this week announced the agreement, saying that the facial recognition systems Rite Aid had in place between 2012 and 2020 to identify shoplifters and others who caused problems in its stores were inaccurate, leading to thousands of misidentifications and false accusations that humiliated consumers and subjected them to harassment.

This was particularly a problem for people of color and women, according to the FTC.

The case touches on many of the issues – from privacy and data security to mass surveillance and the accuracy of such systems – that have been raised about the controversial use of facial recognition technology by organizations, including government and law enforcement agencies and retailers.

The FTC in May warned companies that the agency was putting a focus on the use of biometric systems for identifying customers, saying the growing utilization of such technologies – not only facial recognition but also others, like iris and fingerprint scanning – was raising more issues about consumer privacy and civil rights.

The Rite Aid case was only the latest example of this.

“We often talk about how surveillance ‘violates rights’ and ‘invades privacy,’” FTC Commissioner Alvaro Bedoya wrote in a statement after the commission voted 3-0 in the Rite Aid case. “We should; it does. What cannot get lost in those conversations is the blunt fact that surveillance can hurt people.”

That’s especially true for certain sections of the population, Bedoya said, adding that “it has been clear for years that facial recognition systems can perform less effectively for people with darker skin and women.”

That was a particular problem for Rite Aid. The company put 60% of the systems in stores located in areas where the population was more likely to be people of color, even though 80% of Rite Aid stores are in white-plurality communities, the FTC said.

A Pile of Problems

The FTC found a range of problems with Rite Aid’s use of the facial recognition technology, according to the 54-page complaint the regulators filed. That included the pharmacy chain failing to take steps to mitigate potential risks to consumers, particularly those at a higher risk of misidentification. The agency said the technology was more likely to generate false positives in stores located in mostly Black or Asian communities than in plurality-White neighborhoods.

The company never tested or asked about the accuracy of the technology before or after deploying it. Rite Aid used facial recognition technology from two unnamed vendors, the FTC said. The systems also used low-quality images, increasing the likelihood of false-positive matches, and the company inadequately trained employees who ran the systems.
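The scale of the harm follows from simple arithmetic: even a seemingly low false-positive rate produces a large absolute number of misidentifications when a system scans huge volumes of shoppers. A minimal sketch of that base-rate math – the scan volumes and error rate below are illustrative assumptions, not figures from the FTC complaint:

```python
# Illustrative only: the traffic volumes and error rate are
# hypothetical assumptions, not figures from the FTC complaint.
def expected_false_positives(daily_scans: int, days: int, fp_rate: float) -> float:
    """Expected number of shoppers wrongly flagged over a period."""
    return daily_scans * days * fp_rate

# Suppose 200 monitored stores each scan 500 shoppers a day,
# and the system's false-positive rate is just 0.1%.
daily_scans = 200 * 500  # 100,000 scans per day across the chain
flagged = expected_false_positives(daily_scans, days=365, fp_rate=0.001)
print(f"{flagged:,.0f} wrongful matches per year")  # 36,500
```

Under those assumed numbers, a 99.9%-accurate system would still flag tens of thousands of innocent shoppers a year – which is why regulators focus on testing and accuracy rather than headline accuracy percentages.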

The vendors created a facial recognition database that included people suspected of shoplifting or other crimes in a store and added not only their photos but also other data about the people, including names, birth dates and criminal backgrounds.

“The company collected tens of thousands of images of individuals, many of which were low-quality and came from Rite Aid’s security cameras, employee phone cameras and even news stories,” the FTC wrote in its complaint.

Issues that Need Fixing

Under the new order, the pharmacy has to create safeguards to protect consumers when it uses biometric technologies, delete – and direct third parties to delete – images collected by the system, notify consumers when their personal information is entered into a database connected to a biometric system, delete the biometric information it collects within five years, and let consumers know when a store is using biometric technologies.

Rite Aid operates more than 2,300 retail pharmacies in 17 states and has a workforce of more than 50,000. In a statement, executives said they were pleased to have reached an agreement with the FTC and agreed with the agency about the need to protect consumer privacy.

“However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint,” they wrote. “The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores. Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began.”

The agreement now needs to be approved by the bankruptcy court overseeing Rite Aid’s restructuring efforts and the U.S. District Court.

Technology Raises Privacy, Fairness Concerns

The case is sure to feed into the ongoing debate between retailers and consumer-protection groups that want to ban the broad use of facial recognition software. It’s a fast-growing market, with Allied Market Research predicting it will expand from $5.5 billion last year to $24.3 billion by 2032, driven not only by retailers but also other sectors, including law enforcement, banking, border control and health care.

In his statement, FTC Commissioner Bedoya said that while the agreement with Rite Aid is strong – and only a baseline for what’s required in such environments – there is a “strong argument” that some technologies should “never be deployed in the first place,” a concern that goes beyond surveillance alone.

“Indeed, the harms uncovered in this investigation are part of a much broader trend of algorithmic unfairness – a trend in which new technologies amplify old harms,” he said, noting that such harm is seen in such areas as hiring and housing. “Algorithmic unfairness is pernicious. It hurts people invisibly and at scale. And it tends to harm people because of who they are, reifying patterns of discrimination deeply embedded in our nation’s history. Algorithmic unfairness hurts people who are already hurting.”