DOD Science Board Recommends 'Immediate Action' to Counter Enemy AI
Pentagon scientists worry the U.S. could be on the losing side of an AI arms race.
The Defense Science Board’s much-anticipated “Autonomy” study sees promise and peril in the years ahead. The good news: autonomy, artificial intelligence and machine learning could revolutionize the way the military spies on enemies, defends its troops, or speeds its supplies to the front lines.
The bad news: AI research in commercial and academic settings is advancing faster than the military can keep pace with. Among the most startling recommendations in the study: the United States should take “immediate action” to figure out how to defeat new AI-enabled operations.
In issuing this warning, the study harks back to military missteps in cyber and electronic warfare. While the Pentagon was busy developing offensive weapons, techniques, plans and tricks to use against enemies, it ignored U.S. equipment’s own vulnerabilities.
“For years, it has been clear that certain countries could, and most likely would, develop the technology and expertise to use cyber and electronic warfare against U.S. forces,” the study’s authors wrote. “Yet, most of the U.S. effort focused on developing offensive cyber capabilities without commensurate attention to hardening U.S. systems against attacks from others. Unfortunately, in both domains, that neglect has resulted in DOD spending large sums of money today to ‘patch’ systems against potential attacks.”
That cycle could repeat itself in the field of AI, the study says.
To counter the threat, the study says, the undersecretary of defense for intelligence should “raise the priority of collection and analysis of foreign autonomous systems.” Take that to mean figuring out what China, Russia and others can do and will soon be able to do with artificial intelligence.
Meanwhile, the Pentagon’s office of acquisition, technology and logistics should gather a community of researchers to run tests and scenarios to discover “counter-autonomy technologies, surrogates, and solutions”—in other words, to practice fighting enemy AI systems. This community should have wide discretion in conducting research into commercial drones, software, and machine learning.
“Such a community would not only explore new uses for autonomy, counterautonomy and countering potential adversary autonomy, but also more realistically inform what the tactical advantages and vulnerabilities would be to both the U.S. and adversaries in adopting or adapting commercially available technology,” the study says.
Just as overreliance on information technology created new weaknesses, autonomy is no silver bullet. The study names a handful of “opportunities to limit or defeat the use of autonomy against U.S. forces.”
They include “using deception to confound rules-based logic” or simply overwhelming the AI’s sensor inputs. In most settings, the human brain can differentiate signal from noise far more capably than any human-written program.
The study reiterates the importance of human decision-making, but offers that the greatest potential for autonomy lies in software that learns or adapts on its own, with little to no human guidance. When, if ever, is it safe to put an autonomous learning system like that in charge of a howitzer? The study says the Defense Department doesn’t yet have the means to even ask the question.
“Current testing methods and processes are inadequate for testing software that learns and adapts,” it reads. Better testing procedures, particularly in virtual environments, will be key to getting the most out of next-generation artificial intelligence.
The United States faces a special ethical burden in how it develops and uses autonomy. The military faces pressure—both internally and from outside groups—to limit the use of autonomy in weapons. That’s less true in China and Russia; the latter boasts that it has tested lethal autonomous ground robots as guards for missile sites and is developing a crewless version of the Armata T-14 tank.
“While many policy and political issues surround U.S. use of autonomy, it is certainly likely that many potential adversaries will have less restrictive policies and [concepts of operation] governing their own use of autonomy, particularly in the employment of lethal autonomy. Thus, expecting a mirror image of U.S. employment of autonomy will not fully capture the adversary potential,” notes the study.