Pentagon initiative shows AI helps leaders decide in a ‘data-driven way,’ official says
The deputy chief of the Pentagon’s AI office said the adoption of new tools and technologies is helping the department’s combatant commanders “access information and make better decisions.”
The adoption of artificial intelligence technologies is already helping key military personnel streamline their operations, but officials are working to ensure that data quality and ethical uses guide the Pentagon’s implementation of emerging systems moving forward, a senior leader with the Department of Defense’s Chief Digital and Artificial Intelligence Office said at the Intel Public Sector Summit on Tuesday.
Margie Palmieri, CDAO’s deputy chief digital and AI officer, pointed to DOD’s AI and data acceleration initiative — or ADA — as a ready example of how automation is already propelling data-driven decisions across the agency, and as a potential model for future adoption.
The initiative embeds data and analytic subject matter experts within each of the department’s 11 combatant commands to speed up the implementation of emerging technologies. Palmieri said that since its creation in 2021, the initiative has allowed combatant commanders “to access information and make better decisions from a data-driven way.”
“We see digitization of their processes,” she added. “So things that were manual before, where you had to read one system and type in another — we called them ‘swivel chair,’ because you’re back and forth — have been digitized.”
Palmieri said the Pentagon is “seeing more and more ideas blossom” as a result of ADA, adding: “I think the more we do in this area, the more the mission leaders are seeing benefit from it.”
DOD agencies and components are “asking for AI in a variety of areas,” Palmieri said, noting that CDAO likes to refer to its potential capabilities “from the boardroom to the battlefield.”
Although the Pentagon is already using some AI tools to bolster its approach to equipment maintenance and to sort sensor data from war zones, Palmieri said “when people ask for AI capabilities, we actually find that just basic analytics can bring a huge value to their department.”
But the adoption of more analytics-based approaches across DOD — as well as the implementation of more high-tech AI systems — relies on better access to quality data.
“There are over 4,000 data systems in the Department of Defense, and just making sure that we understand what information we have, that we have access to it, that we understand what’s in these databases and how to really interact with them — that we can combine them with other sources of information and actually bring kind of a cross discipline look on a given issue — all of that is really boring and tedious, but essential work that we have to invest in,” Palmieri said.
She said that feeding low-quality datasets into AI algorithms will produce the wrong outputs, regardless of the underlying technology’s capabilities.
To address these concerns — as well as related worries about AI’s role in military decision-making — Palmieri said DOD is prioritizing the responsible and ethical adoption of these technologies, including by “incorporating experimentation and user feedback as part of that test and evaluation process.”
This work includes examining how new tools could benefit the Pentagon’s mission. In August, for example, the department launched Task Force Lima, an initiative focused on studying the implementation of generative AI. Palmieri said that effort has “collected over 180 use cases across DOD” where generative AI technologies could be beneficial.
“With AI, the users have to constantly be involved,” she said, adding that the department’s “most successful AI projects” are where users have the opportunity to identify the right and wrong answers and “go back and tweak the model” when necessary.
The Pentagon’s AI adoption strategy, released on Nov. 2, placed “quality of data” at the foundation of the agency’s “AI hierarchy of needs.” The strategy’s release came after President Joe Biden issued an executive order on AI at the end of October that emphasized the trustworthy adoption of AI across federal agencies.
Palmieri highlighted DOD’s careful and deliberate approach to AI, including the adoption of a set of ethical principles for the technology in 2020 and the release of an implementation plan last year “for how we’re going to take those principles and actually bring it into our work with DOD.”
She also noted that DOD released a responsible AI toolkit “a couple months ago” that it shared across the department and with international partners, and said the agency plans to make it public in the coming weeks.