DOD experimenting with generative AI, even as ‘potential downsides’ give officials pause
The deputy chief of the Pentagon’s AI office said the department is working with industry to address concerns about the emerging tech.
The Pentagon is testing generative artificial intelligence capabilities to better understand how they work and to develop metrics around their use, but it remains concerned about AI models generating incorrect responses, a senior leader with the Department of Defense’s Chief Digital and Artificial Intelligence Office — or CDAO — said during an event hosted by the RAND Corporation on Tuesday.
Margie Palmieri, deputy chief digital and artificial intelligence officer, said the Pentagon’s approach to using generative AI “is use case-based,” with the department continuing to use computer vision, natural language processing and other types of machine learning algorithms to accomplish critical tasks. But she added that “we are experimenting with different generative AI models.”
As part of its work helping to implement the Pentagon’s Joint All-Domain Command and Control — or JADC2 — strategy, CDAO is holding a series of experiments to bolster the department’s ongoing effort to streamline communication between military assets across air, land, sea, space and cyberspace. Palmieri said these experiments — known as Global Information Dominance Experiments, or GIDE — have included the use of generative AI, with the most recent iteration of GIDE using “about five different models.”
Palmieri said these generative AI tools were used in GIDE “really just to test out” how they work, including whether they can be trained on DOD data, how users interact with them and what metrics are needed to evaluate their use — a concern, she noted, since “there aren’t really great evaluation metrics for generative AI yet, and everything that DOD fields in the technology space has some sort of understanding of how well it works.”
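Her point about metrics is easier to see with a concrete example. The sketch below is illustrative only and is not drawn from CDAO’s work: it implements token-overlap F1, a standard reference-based proxy borrowed from question-answering benchmarks such as SQuAD. Because the metric needs a single “correct” reference answer, two equally reasonable generative responses can score very differently, which is part of why fielded generative systems are hard to evaluate.

from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    # Token-overlap F1, the reference-based proxy used in QA benchmarks
    # such as SQuAD; it rewards word overlap with one "gold" answer.
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Two defensible paraphrases of the same fact score very differently,
# because generative output has no single correct answer.
reference = "the exercise links sensors across air land sea space and cyber"
print(token_f1("it connects sensors in every warfighting domain", reference))  # ~0.11
print(token_f1("the exercise links sensors across all domains", reference))   # ~0.56

Both candidate answers are defensible paraphrases, yet their scores differ by a factor of five; a classification or computer vision model, by contrast, can be scored against a single ground-truth label.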
“It's going to change the way we interact with our machines without a doubt, but it's also going to follow along how we test and train and deploy as well,” she added.
Even with generative AI’s promise, Palmieri warned that “what we found is there’s not enough attention being paid to the potential downsides of generative AI — specifically hallucination.” A generative AI model hallucinates when, asked for a response, it fabricates incorrect or made-up answers, sometimes because of insufficient training data or a misunderstanding of the prompt.
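For a concrete illustration of how such failures can be caught, here is a minimal sketch, written for this article rather than taken from any CDAO system, of a groundedness check: it flags answer sentences whose content words have little support in the source documents the model was given. The function name and threshold are hypothetical, and production systems typically rely on trained entailment or fact-verification models rather than simple word overlap.

def flag_unsupported_sentences(answer: str, sources: list[str],
                               min_support: float = 0.5) -> list[str]:
    # Crude groundedness check: a sentence is suspect when most of its
    # content words never appear in any of the supplied source documents.
    source_words = {w.lower().strip(".,") for doc in sources for w in doc.split()}
    flagged = []
    for sentence in answer.split(". "):
        words = [w.lower().strip(".,") for w in sentence.split() if len(w) > 3]
        if not words:
            continue
        support = sum(w in source_words for w in words) / len(words)
        if support < min_support:
            flagged.append(sentence)  # likely fabricated relative to the sources
    return flagged

sources = ["The report was filed on 12 May by the logistics cell."]
answer = "The report was filed on 12 May. It was approved by General Smith."
print(flag_unsupported_sentences(answer, sources))
# ['It was approved by General Smith.']  <- no support in the sources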
“This is a huge problem for DOD,” she added. “And it really matters for us and how we apply that. And so we're looking to work more closely with industry on these types of downsides, and not just hand-wave them away.”
Tuesday’s discussion came after the Pentagon confirmed that Greg Little — deputy CDAO for enterprise platforms and business optimization — would be leaving the office at the end of July for a position with Palantir.