FTC bans Rite Aid from using AI facial recognition for 5 years
After investigating its business practices, the Federal Trade Commission alleges that the pharmacy chain did not sufficiently protect its consumers’ digital privacy.
Pharmacy chain Rite Aid has been banned by the Federal Trade Commission from using commercial artificial intelligence facial recognition after the company failed to implement “reasonable safeguards” to prevent its automated systems from harming consumers.
In a complaint filed against Rite Aid on December 19, the FTC alleges that the company did not do enough to secure affirmative consent from consumers interacting with the automated biometric systems used across its stores. The software collected sensitive consumer data and protected information without adequately requesting permission, leaving some customers erroneously accused of theft and other wrongdoing in stores.
Tens of thousands of customers were mislabeled as “persons of interest” after the biometric software generated false positive matches against images already enrolled in Rite Aid’s database. In addition to images captured by its automated video surveillance, the company sourced enrollment photos from employee phone cameras and news stories.
As a result of the FTC’s investigation, the chain will be banned from employing biometric facial recognition software for five years.
“Rite Aid's reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk," said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a press release. “Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.”
Following the FTC’s announcement, Rite Aid denied wrongdoing. In a statement issued the same day, the company announced its settlement with the federal government but asserted that the biometric surveillance systems had been installed only in a “limited number of stores” as a pilot program.
“We are pleased to reach an agreement with the FTC and put this matter behind us,” the press release said. “We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy. However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint. The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores. Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began.”
The company also pledged to continue serving its communities safely.
“As part of the agreement with the FTC, we will continue to enhance and formalize the practices and policies of our comprehensive information security program,” the release concluded.
The FTC has emphasized the safe deployment of emerging systems, particularly automated and biometric technologies, in its enforcement work in recent years. Prior to the complaint against Rite Aid, the Commission voted to streamline investigations into the misuse of AI products, and took action against fertility app company Premom for allegedly violating users’ privacy.