Watchdog: FBI Facial Recognition May Not Be Accurate
A new GAO report examined the Interstate Photo System, with more than 30 million photos, and the FBI’s face recognition capability.
The FBI doesn’t know exactly how accurate its facial recognition technology is, a new watchdog report finds.
The bureau's Next Generation Identification-Interstate Photo System, a database of more than 30 million photos, most of them criminal mugshots, lets law enforcement compare a surveillance camera photo against those on file, narrowing a suspect's identity to between two and 50 possible candidates.
But the FBI hasn't ensured its facial recognition technology doesn’t “unnecessarily include photos of innocent people as investigative leads,” according to a new report from the Government Accountability Office. The audit covered both NGI-IPS and the FBI’s "Facial Analysis, Comparison and Evaluation Services," or FACE, which can also search databases maintained by other federal, state and local agencies.
Most photos in NGI-IPS are submitted by 18,000 federal, state and local law enforcement agencies; about 70 percent are criminal mugshots. It’s the same technology that helped the FBI and a state track down a sex offender who had been on the run for 20 years. The FBI has spent about $55 million on facial recognition over the last six years.
FBI officials haven’t tested the detection rate -- how often the system returns a match for a submitted photo -- for candidate lists of fewer than 50, according to GAO. Law enforcement may request a specific number of candidates for any search, though the default is 20.
Verifying that NGI-IPS is accurate for all candidate list sizes would provide more assurance that the system helps to “enhance, rather than hinder, criminal investigations,” the GAO report said.
The FBI also hasn’t assessed how often errors occur in facial matching. These errors can result from lower-quality technology as well as from low-quality photos, the report said.
The detection rate and the false-positive rate are key data points that would help the bureau and the public understand these risks before the technology is deployed, the report said. GAO also found the FBI hadn’t determined whether the facial recognition technology its federal, state and local partners use is accurate enough to support its own investigations.
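As a rough illustration of the two metrics GAO cites, the sketch below shows how a detection rate and a false-positive rate could be estimated for a system that, like NGI-IPS, returns a fixed-size candidate list for each submitted photo. It is hypothetical code, not the FBI's or GAO's actual methodology; the `search` function, the labeled probe set and the default list size of 20 are all assumptions made for the example.

```python
# Hypothetical sketch -- not the FBI's or GAO's actual methodology.
# `search(probe_photo, num_candidates)` stands in for a face recognition
# query that returns a ranked list of candidate identities, the way
# NGI-IPS returns between 2 and 50 candidates per search.

def evaluate(search, probes, num_candidates=20):
    """Estimate detection rate and false-positive rate for one list size.

    probes -- list of (probe_photo, true_identity) pairs with known
              ground truth (an assumption for illustration).
    detection rate      -- share of probes whose true identity appears
                           in the returned candidate list.
    false-positive rate -- share of returned candidates who are not the
                           true identity, i.e. innocent people surfaced
                           as investigative leads.
    """
    hits = 0
    false_positives = 0
    total_candidates = 0

    for probe_photo, true_identity in probes:
        candidates = search(probe_photo, num_candidates)
        total_candidates += len(candidates)
        if true_identity in candidates:
            hits += 1
            false_positives += len(candidates) - 1
        else:
            false_positives += len(candidates)

    detection_rate = hits / len(probes)
    false_positive_rate = false_positives / total_candidates
    return detection_rate, false_positive_rate
```

Running the same probe set with the candidate count set to 2, 20 and 50 would show whether accuracy holds across the list sizes law enforcement can actually request -- the gap GAO says the FBI has not tested.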
These oversights could impinge on citizens' privacy and civil liberties, the report noted. In 2012, the advocacy group Electronic Frontier Foundation warned that facial recognition systems could allow “covert, remote, and mass capture and identification of images.”
In criminal cases, a false positive might force defendants to prove they aren’t who the facial recognition system suggested they were -- a scenario that might “alter the traditional presumption of innocence,” an EFF statement said.
The FBI has also been slow to publish its privacy documentation, the report found. The Justice Department didn’t update a key “Privacy Impact Assessment” between 2008 and 2015, and though NGI-IPS has existed since 2011, the FBI didn’t publish the requisite System of Records Notice, explaining how the technology is used, until May 2016.
Publishing these notices more promptly would reassure the public “the FBI is evaluating risks to privacy,” the report said.
The GAO review comes shortly after DOJ published a notice arguing its massive biometric database should be exempted from the Privacy Act, which requires the federal government to disclose, upon request from the subject, the information it collects on the public. The system includes finger and palm prints, iris and facial scans, and images of tattoos, collected from criminals, suspects, detainees and anyone undergoing background checks, security clearances or other government assessments.