Rite Aid used facial recognition on shoppers, fueling harassment, FTC says

The pharmacy chain Rite Aid misused facial recognition technology in a way that subjected shoppers to unfair searches and humiliation, the Federal Trade Commission said Tuesday, part of a landmark settlement that could raise questions about the technology’s use in stores, airports and other venues nationwide.

Federal regulators said Rite Aid activated the face-scanning technology, which uses artificial intelligence to attempt to identify people captured by surveillance cameras, in hundreds of stores between 2012 and 2020 in hopes of cracking down on shoplifters and other problematic customers.

But the chain’s “reckless” failure to adopt safeguards, coupled with the technology’s long history of inaccurate matches and racial biases, ultimately led store employees to falsely accuse shoppers of theft, causing “embarrassment, harassment, and other harm” in front of their family members, co-workers and friends, the FTC said in a statement.

In one case, a Rite Aid employee searched an 11-year-old girl because of a false facial recognition match, leaving her so distraught that her mother missed work, the FTC said in a federal court complaint. In another, employees called the police on a Black customer after the technology mistook her for the actual target, a White woman with blond hair.

Rite Aid said in a statement that it used facial recognition in only “a limited number of stores” and that it had ended the pilot program more than three years ago, before the FTC’s investigation began.

As part of a settlement, the company agreed not to use the technology for five years, to delete the face images it had collected and to update the FTC annually on its compliance, the FTC said.

“We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy,” the company said.

Rite Aid’s system scanned the faces of entering customers and looked for matches in a large database of suspected and confirmed shoplifters, the FTC said. When the system detected a match, it alerted store employees to watch the shopper closely.

But the database included low-resolution images taken from grainy surveillance cameras and cellphones, undermining the quality of the matches, the FTC said. Those faulty matches then prompted employees to trail customers around the store or call the police, even when they had witnessed no crime.
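The dynamic the FTC describes can be illustrated with a toy example: grainy enrollment photos drag down the similarity scores of genuine targets, so an operator must lower the alert threshold to catch them, and a lower threshold inevitably flags innocent shoppers. The sketch below is purely hypothetical; it uses random vectors in place of real face embeddings, and the dimension, noise level and threshold are invented for illustration, with no relation to Rite Aid's actual system or any vendor's software.

```python
# Hypothetical toy model of threshold-based face matching -- NOT Rite Aid's
# or any vendor's real system. Random vectors stand in for face embeddings.
import numpy as np

rng = np.random.default_rng(42)
DIM = 128          # embedding dimension (invented for illustration)
N_DATABASE = 10_000  # enrolled "persons of interest"

# Each enrolled embedding is a person's "true" face vector plus heavy noise,
# standing in for grainy surveillance stills and cellphone photos.
true_vecs = rng.normal(size=(N_DATABASE, DIM))
ENROLL_NOISE = 1.5
database = true_vecs + ENROLL_NOISE * rng.normal(size=true_vecs.shape)
database /= np.linalg.norm(database, axis=1, keepdims=True)

def best_match(query):
    """Return (index, cosine similarity) of the closest database entry."""
    q = query / np.linalg.norm(query)
    sims = database @ q
    i = int(np.argmax(sims))
    return i, float(sims[i])

# A clean live capture of an enrolled person matches its own noisy
# enrollment photo only weakly, so a usable threshold must be set low...
_, genuine_sim = best_match(true_vecs[0])
print(f"best similarity for a genuine target: {genuine_sim:.2f}")

# ...and a low threshold also fires on shoppers who are in no database
# at all, producing bogus "match alerts".
THRESHOLD = 0.45
false_alerts = sum(
    best_match(rng.normal(size=DIM))[1] > THRESHOLD for _ in range(1_000)
)
print(f"false alerts on 1,000 innocent shoppers: {false_alerts}")
```

In this toy setup, noisier enrollment photos lower every genuine match score, and each downward adjustment of the threshold to compensate raises the rate of false alerts on strangers, mirroring the trade-off regulators said Rite Aid failed to manage.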

Rite Aid did not tell customers it was using the technology, the FTC said, and it instructed employees not to reveal its use to “consumers or the media.” The FTC said Rite Aid contracted with two companies to help create its database of “persons of interest,” which included tens of thousands of images. Those firms were not identified.

The FTC said huge errors were commonplace. Between December 2019 and July 2020, the system generated more than 2,000 “match alerts” for the same person in faraway stores around the same time, even though the scenarios were “impossible or implausible,” the FTC said.

In one case, Rite Aid’s system generated more than 900 “match alerts” for a single person over a five-day period across 130 different stores, including in Seattle, Detroit and Norfolk, regulators said.

The system generated thousands of false matches, and many of them involved the faces of women, Black people and Latinos, the FTC said. Federal and independent researchers in recent years have found that those groups are more likely to be misidentified by facial recognition software, though the technology’s boosters say the systems have since improved.

Rite Aid also prioritized the deployment of the technology in stores frequented predominantly by people of color, the FTC said. Though roughly 80 percent of Rite Aid’s stores are in “plurality-White” areas, the FTC found that most of the stores that used the facial recognition program were located in “plurality non-White areas.”

The false accusations led many shoppers to feel as if they had been racially profiled. In a note cited by the FTC, one shopper wrote to Rite Aid that the experience of being stopped by an employee had been “emotionally damaging.” “Every black man is not [a] thief nor should they be made to feel like one,” the unnamed customer wrote.

The FTC said Rite Aid’s use of the technology violated a 2010 data security order, part of an FTC settlement filed after the pharmacy chain’s employees were found to have thrown people’s health records into open trash bins. Rite Aid will now be required to implement a robust information security program overseen by the company’s top executives.

The FTC action could ripple through other major U.S. retail chains that have pursued facial recognition technology, such as Home Depot, Macy’s and Albertsons, according to a “scorecard” compiled by Fight for the Future, an advocacy group.

Evan Greer, the group’s director, said in a statement, “The message to corporate America is clear: stop using discriminatory and invasive facial recognition now, or get ready to pay the price.”

FTC Commissioner Alvaro Bedoya, who before joining the FTC last year founded a Georgetown Law research center that critically examined facial recognition, said in a statement that the Rite Aid case was “part of a broader trend of algorithmic unfairness” and called on company executives and federal lawmakers to ban or restrict how “biometric surveillance” tools are used on customers and employees.

“There are some decisions that should not be automated at all; many technologies should never be deployed in the first place,” Bedoya wrote. “I urge legislators who want to see greater protections against biometric surveillance to write those protections into legislation and enact them into law.”

Joy Buolamwini, an AI researcher who has studied facial recognition’s racial biases, said the Rite Aid case was an “urgent reminder” that the country’s failure to enact comprehensive privacy laws had left Americans vulnerable to risky experiments in public surveillance.

“These are the types of common sense restrictions that have been a long time coming to protect the public from reckless adoption of surveillance technologies,” she said in a text message. “The face is the final frontier of privacy and it is crucial now more than ever that we fight for our biometric rights, from airports to drugstores to schools and hospitals.”

This post appeared first on The Washington Post
