Human operators must be held accountable for AI’s use in conflicts, Air Force secretary says
The Pentagon needs “to find a way to hold people accountable” for what artificial intelligence technologies do in future conflicts, according to Air Force Secretary Frank Kendall.
Humans will ultimately be held responsible for the use or misuse of artificial intelligence technologies during military conflicts, a top Department of Defense official said during a panel discussion at the Reagan National Defense Forum on Saturday.
Air Force Secretary Frank Kendall dismissed the notion “of the rogue robot that goes out there and runs around and shoots everything in sight indiscriminately,” emphasizing that AI technologies — particularly those deployed on the battlefields of the future — will operate under some level of human oversight.
“I care a lot about civil society and the rule of law, including laws of armed conflict,” he said. “Our policies are written around compliance with those laws. You don't enforce laws against machines; you enforce them against people. And I think our challenge is not to somehow limit what we can do with AI, but it's to find a way to hold people accountable for what the AI does.”
Even as the Pentagon continues to experiment with AI, the department has worked to establish safeguards around its use of the technologies. DOD updated its decades-old policy on autonomous weapons in February to clarify, in part, that weapons with AI-enabled capabilities need to follow the department’s AI guidelines.
The Pentagon previously issued a series of ethical AI principles in 2020 governing its use of the technologies, and released a data, analytics and AI adoption strategy in November that positioned quality of data as key to the department’s implementation of the advanced tech.
The goal for now, Kendall said, is to build confidence and trust in the technology and then “get it into field capabilities as quickly as we can.”
“The critical parameter on the battlefield is time,” he added. “And AI will be able to do much more complicated things much more accurately and much faster than human beings can.”
Kendall pointed to two specific mistakes that AI could make “in a lethal area”: failing to engage a target it should have engaged, or engaging civilian targets, U.S. military assets or allies. These possibilities, he said, necessitate more clearly defined rules for holding operators responsible when such errors occur.
“We are still going to have to find ways to manage this technology, manage its application and hold human beings accountable for when it doesn't comply with the rules that we already have,” he added. “I think that’s the approach we need to take.”
For the time being, however, the Pentagon’s use of AI is largely focused on processing large amounts of data for administrative tasks.
“There are enormous possibilities here, but it is not anywhere near general human intelligence equivalents,” Kendall said, citing pattern recognition and “deep data analytics to associate things from an intelligence perspective” as AI’s most effective applications.
During a discussion last month, Schuyler Moore — the chief technology officer for U.S. Central Command — cited AI’s uneven performance and said that during military conflicts, officials “will more frequently than not put it to the side or use it in very, very select contexts where we feel very certain of the risks associated.”
But concerns remain about how these tools will ultimately be used to enhance future warfighting capabilities, and about the specific policies needed to enforce safeguards.
Rep. Mike Gallagher, R-Wis. — who chairs the House Select Committee on the Chinese Communist Party and formerly co-chaired the Cyberspace Solarium Commission — said “we need to have a plan for whether and how we are going to quickly adopt [AI] across multiple battlefield domains and warfighting capabilities.”
“I'm not sure we've thought through that,” Gallagher added.