US Must Be More Aware of 'Adversarial Side' of AI, DHS Official Warns
The Department of Homeland Security’s Science and Technology Directorate aims to better understand AI as it becomes integrated with the nation’s infrastructure.
As emerging technologies like recently popular artificial intelligence tools become more integrated into daily life, the Department of Homeland Security is turning its attention to the problematic features of these nascent systems.
Speaking during a GovCon Wire panel on Tuesday, Dimitri Kusnezov, under secretary for DHS’s Science and Technology Directorate, discussed how his office is working to mitigate potential harms within AI systems as they become a more ubiquitous part of society.
“The adversarial side of AI… is one aspect of this big area that I think we have to be more domain awareness of, with respect to the broad national security role, domestic Homeland Security role of the agency,” Kusnezov said.
Kusnezov referenced documented problems posed by AI technology, like deepfakes and errors within ChatGPT, but added that AI is likely to be utilized in standard workflow operations and data analytics, primarily to help automate certain tasks.
While the private sector is already exploring different ways to deploy AI and machine learning foundational models that learn from large bodies of data, Kusnezov said that there are possible applications in the government, such as the Transportation Security Administration using facial recognition systems at checkpoints.
“I think about the TSA, for example, we take 5 million images a day,” he said. “You think about other border crossings and other places where data streams into DHS, [the] question is, are foundation models going to be impactful?”
He noted, however, that public and private sectors alike should be focusing on both the positive and negative aspects of innovating with these technologies.
“I think we have to question what our constructs are, what our risk models are, what is the impact of changes. Are we on the path to be prepared for potential futures? Or should we be thinking a little differently?” Kusnezov posited.
Deeper oversight aimed at mitigating the potential pitfalls of AI and ML systems is particularly crucial as parts of the country begin to incorporate more autonomous systems into critical infrastructure operations. Kusnezov highlighted how more autonomous and integrated digital networks supporting necessities like water and power will demand more sophisticated cybersecurity provisions and a deeper general understanding.
“We have to be in that conversation, to evolve with it to understand where it could go,” he said. “These things [smart infrastructure] will be talking to each other, you know, in automated ways through microcode that will be embedded in there that we’ll never be aware of.”
These powerful yet sensitive software systems will be a new target for adversaries to attack virtually. Kusnezov advocated for regulatory measures like standards development, both domestically and internationally, to prepare for the advent of this threat.
“We have to be part of that conversation,” he said. “The innovation here is global. And we have to grab people who can help figure this out with us.”
Kusnezov said that part of S&T’s preparedness for handling emerging technology development will focus on crafting malleable systems and entering into contracts that can adapt to an uncertain future.
“It’s not that we’re thinking about products for today, we are thinking about that in the context of a nonlinear world, and things changing in ways we don’t anticipate,” he said.