GSA testing finds variations in the accuracy of digital ID verification tech
The preliminary results of a study that tested five different vendor solutions found some to be relatively accurate, while others were inaccurate or inequitable.
Preliminary results from the General Services Administration’s study on digital identity verification technologies have been published, and they show that the performance of different solutions varies widely.
The study is focused on remote identity proofing technologies meant to ensure that someone is who they say they are online, a step that is sometimes required to access government benefits. These technologies proliferated in unemployment insurance during and after the pandemic due to concerns about fraud, for example, prompting the Labor Department’s watchdog to issue a warning about potential discrimination.
GSA is working with the Center for Identification Technology Research at Clarkson University to field this study on remote ID checks that require users to take a selfie to be compared against a photo they submit of their government-issued ID.
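At its core, this kind of remote check is a one-to-one face match: the system compares the live selfie against the photo on the submitted ID and accepts or rejects based on a similarity threshold. The sketch below is an illustration of that idea only — the embedding model, cosine similarity measure and 0.8 threshold are assumptions for demonstration, not how any of the tested vendors actually work.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(selfie_embedding: list[float],
           id_photo_embedding: list[float],
           threshold: float = 0.8) -> bool:
    """Hypothetical 1:1 match: accept only if the selfie's embedding is
    close enough to the ID photo's embedding. Where the threshold is set
    drives the trade-off between false rejections and false acceptances."""
    return cosine_similarity(selfie_embedding, id_photo_embedding) >= threshold
```

The equity questions in the study arise because both the embedding model and the threshold can perform differently across demographic groups.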
Researchers found that while two of the commercial software offerings they tested were equitable across all demographics, others either were inequitable or simply didn’t work well. One had a false rejection rate — where a real person with their actual ID is rejected — of over 50%.
GSA runs a government identity proofing and single sign-on solution, Login.gov, and has for years been saying that these tests will help it ensure that whatever technology it uses is equitable for its 100 million-plus users.
The final, peer-reviewed results and report are expected in 2025, a GSA spokesperson told Nextgov/FCW. That report will have a statistical analysis of performance at every step of the process. The latest results are preliminary and have been submitted for that academic peer review.
These results dig into testing of five commercial solutions for equity across differences of age, gender, race and ethnicity, and skin tone.
None of the vendors are named in the study, but a 2023 GSA privacy impact assessment for the study lists TransUnion, Socure, Jumio, LexisNexis, Incode and red violet as vendors it is using “to collect and analyze the needed data for the equity study.”
About 4,000 people participated in the study by providing their demographic and personal information, as well as a photo of their ID and a selfie.
Although two of the tested vendors had solutions deemed equitable, another had higher false rejection rates for both Black participants and those with darker skin tones.
Another performed better with Asian Americans and Pacific Islanders than with other demographic groups.
And another solution had an overall false negative rate of about 50%, meaning that only about half of the legitimate users got through. The best performance among the solutions for false negatives was about 10%.
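The false negative (or false rejection) rate is simple arithmetic: the share of genuine users, presenting their own valid ID, whom the system nonetheless turns away. A short illustration, using made-up numbers rather than the study’s actual data:

```python
def false_rejection_rate(genuine_attempts: list[bool]) -> float:
    """genuine_attempts: outcomes for legitimate users with valid IDs.
    True = accepted, False = wrongly rejected."""
    rejected = sum(1 for accepted in genuine_attempts if not accepted)
    return rejected / len(genuine_attempts)

# Toy data: 10 legitimate attempts, 5 rejected -> 50% FRR,
# comparable to the worst-performing solution in the study.
attempts = [True, False, True, False, True, False, True, False, True, False]
print(f"FRR: {false_rejection_rate(attempts):.0%}")  # prints "FRR: 50%"
```

By this measure, a 50% rate means a coin flip for legitimate users, while the best solution’s roughly 10% rate would still reject about one in ten.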
All this comes as the agency continues to work in the shadow of a 2023 watchdog report that found GSA officials had for years misled other agencies about Login.gov, particularly over whether it met digital identity proofing standards established by the National Institute of Standards and Technology.
Although GSA had said it wouldn’t add facial recognition to Login.gov because of equity concerns, the agency has since been adding face-matching technology to meet those NIST standards.
A GSA spokesperson told Nextgov/FCW that “Login.gov is currently using a vendor with an algorithm that was one of the highest performers in [NIST testing] study, and looks forward to continuing to evaluate research, such as the final equity study results when they are complete, to assess all aspects of its performance and inform future efforts.”
Facial recognition can be controversial, in part due to concerns about bias being baked into the technology.
The U.S. Commission on Civil Rights released a report just last week noting the lack of laws or regulations on the use of facial recognition by the federal government or any standardized policy for its use among agencies. It also pointed to how widely different algorithms can perform.
"Unregulated use of facial recognition technology poses significant risks to civil rights, especially for marginalized groups," said Rochelle Garza, chair of the U.S. Commission on Civil Rights, in a statement about that report. "We must ensure that facial recognition technology is rigorously tested for fairness, and that any detected disparities across demographic groups are promptly addressed or suspend its use until the disparity has been addressed."
There are also basic, unanswered questions about how well this type of selfie-plus-ID check technology works, something the Department of Homeland Security’s research arm has been testing.
“This study confirms that it is necessary to evaluate products across demographic groups to fully understand the performance of remote identity verification technologies,” the report states.