Half of American Adults Are in Police Facial-Recognition Databases
Cities and states are investing in biometric scanning technology, with few laws in place to restrict what they can do with it.
If you’re reading this in the United States, there’s a 50 percent chance a photo of your face is in at least one database used in police facial-recognition systems.
Police departments in nearly half of U.S. states can use facial-recognition software to compare surveillance images against databases of ID photos or mugshots. Some departments use facial recognition only to confirm the identity of a suspect who’s been detained; others continuously analyze footage from surveillance cameras to determine who is walking by at any given moment. Altogether, more than 117 million American adults are subject to face-scanning systems.
These findings were published Tuesday in a report from the Center on Privacy & Technology at Georgetown Law. It details the results of a year-long investigation that drew on more than 15,000 pages of records obtained through more than 100 freedom-of-information requests.
The study’s authors, Clare Garvie, Alvaro Bedoya, and Jonathan Frankle, set out to fill large gaps in public knowledge about how facial-recognition technology is used and what policies, if any, constrain how police departments can use it. Some details about the FBI’s use of face scanning were previously known, but the scale of state and local law-enforcement involvement is only now starting to come to light.
Facial recognition is fundamentally different from other types of searches, the authors contend, and not just because it makes it easy for police to track people by their physical features rather than by keeping an eye on their possessions, like a smartphone, a house, or a car.
For one, it allows officers to track large groups of people who aren’t necessarily suspected of committing a crime. Courts haven’t determined whether facial recognition constitutes a “search,” which would limit its use under the Fourth Amendment, so many departments use it on the public indiscriminately. (The same is true of technologies that track a smartphone’s location, for example.)
What’s more, for a facial-recognition system to work, there needs to be a database for it to check against. If a police agency wants to know whether the guy caught holding up a bank in a surveillance photo is John Doe, it needs to already have a photo of John on file. If the surveillance footage is good enough, the recognition algorithm can then estimate the probability that the face in the photo is the same as the one in John’s driver’s license portrait.
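In practice, such systems typically reduce each photo to a numeric template, an “embedding,” and score database entries by their similarity to the probe image. The Python sketch below is a minimal illustration of that lookup step, assuming a generic embedding-based design; the embeddings, names, and threshold are hypothetical placeholders, not the workings of any system covered by the report.

```python
import numpy as np

# Toy illustration of a face-recognition lookup. The "embeddings" are
# random placeholder vectors; a real system would derive them from face
# images with a trained model.
rng = np.random.default_rng(seed=0)

def cosine_similarity(a, b):
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery: ID-photo embeddings keyed by name.
gallery = {name: rng.normal(size=128)
           for name in ["John Doe", "Jane Roe", "Richard Miles"]}

# A "probe" from surveillance footage, faked here as a noisy copy of
# John Doe's ID-photo embedding so the example yields a plausible match.
probe = gallery["John Doe"] + rng.normal(scale=0.1, size=128)

# Rank every gallery entry by similarity to the probe.
ranked = sorted(((cosine_similarity(probe, emb), name)
                 for name, emb in gallery.items()), reverse=True)

THRESHOLD = 0.8  # arbitrary cutoff; real systems tune this carefully
for score, name in ranked:
    verdict = "candidate match" if score >= THRESHOLD else "no match"
    print(f"{name}: {score:.2f} ({verdict})")
```

The detail the toy preserves is that the algorithm doesn’t return a yes or a no; it returns ranked candidates with similarity scores, and where an agency sets its threshold governs how often innocent people surface as candidate matches.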
Identifying people this way requires importing many millions of ID photos of innocent people into lookup databases. According to the report, 80 percent of the photos in the FBI’s facial-recognition network are of non-criminals; only 8 percent show known criminals.
“Never before has federal law enforcement created a biometric database—or network of databases—that is primarily made up of law-abiding Americans,” the report says.
Many of the local and state police departments with access to these databases face few checks on how they use them. Only five states have laws that touch on how law enforcement can use facial recognition, and none of those laws addresses more than one aspect of the issue, the report found.
That means some departments have gotten away with patently absurd uses of the technology. In Maricopa County, Arizona, the sheriff’s office, led by a famously combative and anti-immigrant sheriff, downloaded every driver’s license and mugshot the Honduran government provided into its facial-recognition database.
Departments that use the technology in a more straightforward way can still be stymied by the inaccuracies and biases that often plague facial-recognition algorithms. The algorithms have been found to perform more poorly on African American faces than on faces of other races, which can make it more likely that a system will misidentify an innocent black person as a suspect. And because African Americans are disproportionately likely to be arrested, and thus to show up in mugshot databases, systems that rely on booking photos will be more likely to flag an African American face than a Caucasian one.
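Those two problems compound, and a back-of-the-envelope model shows why. The sketch below, with entirely invented numbers, isolates the database-composition effect by assuming the per-comparison error rate is identical for both groups:

```python
# Toy model: expected false matches per search, by group.
# All numbers are invented for illustration only.
database = {"group_A": 700_000, "group_B": 300_000}    # mugshot entries
false_match_rate = {"group_A": 1e-5, "group_B": 1e-5}  # per comparison

for group, entries in database.items():
    expected = entries * false_match_rate[group]
    print(f"{group}: {expected:.1f} expected false matches per search")
```

Even with identical per-comparison accuracy, the overrepresented group absorbs more of the expected false matches simply because it contributes more comparisons; if the algorithm is also less accurate on that group, the two effects stack.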
At the end of a long thread of tweets highlighting the report’s findings—I compiled the thread here—the authors pointed out a handful of departments that modeled “responsible” use of facial-recognition technology.
Bedoya lauded the Seattle Police Department for banning real-time facial recognition—that’s when algorithms comb through live video feeds for matches—and for consulting with the Washington branch of the American Civil Liberties Union when developing its use policy. In San Diego, police got legislative approval to use the technology, Bedoya said, and San Francisco police stipulated strong accuracy requirements in their contracts with facial-recognition vendors.
But most cities and states have no such limits on how their (often very expensive) systems can be used. And a pattern of opacity means it’s really hard to find out how they are using them, absent an army of researchers and fat stacks of FOIAs. With their landmark report, the authors hope to push more law enforcement agencies toward transparency—and convince state legislatures and Congress to pass laws regulating facial recognition, to make sure it isn’t abused.