Next digital identity testing at DHS to focus on ‘liveness’ detection
The department’s Science and Technology Directorate is taking applications from vendors that want to participate in testing technologies that claim to determine whether a submitted image is legitimate or a hacker’s spoof.
The Department of Homeland Security’s research arm is moving forward with efforts to fill gaps in what’s known about how well remote identity verification technologies work, announcing Tuesday that it’s taking applications from developers to test their liveness detection technologies, which are meant to weed out presentation attacks.
The first two rounds of testing in the department’s Remote Identity Validation Technology Demonstration — focused on the ability of systems to verify photographed identity documents as real and match photos on those IDs to selfies — are already underway.
Up next are technologies known as liveness tests, which are meant to check that a submitted selfie is really a photo of a live person rather than a mask, a photo of a photo or some other spoof designed to get past the check, Arun Vemury, senior engineering advisor for identity technologies at DHS’ Science and Technology Directorate, told Nextgov/FCW.
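Conceptually, a presentation attack detection check reduces to scoring a submitted image and comparing that score against an operating threshold. The Python sketch below is a hypothetical illustration of that decision step; the `check_liveness` function, the score values and the 0.8 threshold are assumptions for the example, not DHS’ or any vendor’s actual implementation. (The “bona fide” versus “attack” terminology comes from the ISO/IEC 30107 standard that presentation attack detection evaluations commonly follow.)

```python
from dataclasses import dataclass

@dataclass
class PadResult:
    score: float        # model's confidence that the sample shows a live person
    is_bona_fide: bool  # True = accepted as a live person, False = flagged as a spoof

def check_liveness(score: float, threshold: float = 0.8) -> PadResult:
    """Classify a selfie as bona fide (a live person) or a presentation
    attack (mask, photo of a photo, replayed video, etc.) by comparing
    a liveness score against an operating threshold.

    In a real system `score` would come from a trained detection model;
    here it is just a float in [0, 1]. The 0.8 threshold is illustrative:
    operators tune it to trade off false rejects against false accepts.
    """
    return PadResult(score=score, is_bona_fide=score >= threshold)

# A high-scoring live selfie passes; a low-scoring printed photo is flagged.
print(check_liveness(0.93))  # PadResult(score=0.93, is_bona_fide=True)
print(check_liveness(0.41))  # PadResult(score=0.41, is_bona_fide=False)
```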
The directorate is taking applications from developers who want to participate through Feb. 29. The Transportation Security Administration, Homeland Security Investigations Forensic Laboratory and National Institute of Standards and Technology are partnering on the testing as well.
The goal of the tests is to examine how well the technologies actually work. Individuals are at times required to use them to access online services like bank accounts, unemployment benefits or tax resources as a way to prevent fraud, which spiked in government services during the pandemic.
“I think it’s fair to say that these are pretty widely deployed,” said Vemury. “They provide a lot of value and a lot of convenience, but at the end of the day, we don’t really have a lot of information about how well they work.”
The use of facial recognition technology for remote identity verification has been fraught with concerns about disparities in how well the technology performs across people of different demographics and the real-world effects of getting it wrong in use cases like criminal justice.
It’s hard to give a straight answer on how well the tech works across demographics, said Vemury, because of the wide range of products. The best technologies make few mistakes, but lower-performing ones do make errors, both overall and at different rates across demographic groups, he said, noting that a system’s processes, safeguards and particular use cases also influence performance.
The ongoing testing at DHS is meant to put data behind questions about how often real users get incorrectly flagged as fraudsters and how often bad actors get through identity checks, as well as whether users’ demographics affect the answers to those questions, said Vemury. Which performance measures best fit an evolving threat landscape is also a focus.
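Those two error rates have standard names in presentation attack detection testing: the share of attacks wrongly accepted (APCER, the attack presentation classification error rate) and the share of genuine users wrongly rejected (BPCER, the bona fide presentation classification error rate), both defined in ISO/IEC 30107-3. The sketch below is a hypothetical illustration of how such rates, including per-demographic-group breakdowns, might be tallied from labeled test outcomes; the trial records, field names and toy data are assumptions for the example, not the RIVTD test harness.

```python
from collections import defaultdict

# Each test trial records whether the sample was a real user or an attack,
# whether the system accepted it, and a demographic group label.
trials = [
    {"bona_fide": True,  "accepted": True,  "group": "A"},
    {"bona_fide": True,  "accepted": False, "group": "B"},  # real user rejected
    {"bona_fide": False, "accepted": True,  "group": "A"},  # attack got through
    {"bona_fide": False, "accepted": False, "group": "B"},
]

def error_rates(trials):
    """Return (APCER, BPCER): the fraction of attacks wrongly accepted
    and the fraction of genuine users wrongly rejected."""
    attacks = [t for t in trials if not t["bona_fide"]]
    genuine = [t for t in trials if t["bona_fide"]]
    apcer = sum(t["accepted"] for t in attacks) / len(attacks)
    bpcer = sum(not t["accepted"] for t in genuine) / len(genuine)
    return apcer, bpcer

# Overall rates, then the same rates broken out by demographic group
# to surface any performance disparities.
print(error_rates(trials))
by_group = defaultdict(list)
for t in trials:
    by_group[t["group"]].append(t)
for group, subset in sorted(by_group.items()):
    print(group, error_rates(subset))
```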
“With new technologies — with 3D printing, with generative AI — those costs have come down a lot, and the level of effort, the level of expertise, needed to put [attacks] together are much lower than they used to be. So what used to previously be thought of as a high-end attack, now somebody who has access to generative AI tools may be able to put together very convincing attacks at a much lower price and a much lower skill level as well,” said Vemury.
Given the speed of change, Vemury said the researchers are considering running multiple iterations of these tests. His team has also fielded strong interest in the results from both federal agencies and international partners, he said.
“We want to make sure we understand that process versus just relying upon vendor claims,” he said. “We still have some really big questions to answer about, ‘What’s going on with fraud? And how well these technologies really do to help tamp down on fraud? Or what else could be done in this space to further improve their effectiveness?’”