The Pentagon's Plan for 'Responsible AI'
A 47-page document released this week outlines the Pentagon's plan to incorporate its two-year-old ethical AI principles throughout a system's design, development, and use.
The Defense Department is rolling out the long-awaited implementation strategy for its responsible artificial intelligence principles.
Deputy Secretary of Defense Kathleen Hicks signed the Responsible Artificial Intelligence Strategy and Implementation Pathway, which was released June 22.
"It is imperative that we establish a trusted ecosystem that not only enhances our military capabilities but also builds confidence with end-users, warfighters, the American public, and international partners. The Pathway affirms the Department's commitment to acting as a responsible AI-enabled organization," Hicks said in a news release announcing the new pathway.
The 47-page document outlines the Pentagon's plan to incorporate its two-year-old ethical AI principles throughout a system's design, development, and use. Each of the six tenets (governance, warfighter trust, product and acquisition, requirements validation, the responsible AI ecosystem, and workforce) includes lines of effort, goals, and estimated timelines.
Responsible AI leads for DOD components are expected to report "exemplary" AI use cases, best practices, failure modes, and risk mitigation strategies to the chief digital and artificial intelligence officer (CDAO) within a year. Additionally, within six months of the plan's approval, those leads must also report "any perceived significant barriers" to fulfilling the responsible AI requirements, including those related to infrastructure, hardware, and software.
Diane Staheli will lead the implementation effort as the responsible artificial intelligence chief for the CDAO, which replaces the Joint AI Center. Dr. Jane Pinelis will provide executive-level guidance as chief of DOD's AI assurance directorate, which oversees the CDAO's testing and evaluation and responsible AI divisions, the document states.
The strategy, which comes after the Defense Innovation Unit published responsible AI guidelines for commercial companies, also stresses the Defense Department's desire to foster trust in AI capabilities among military leaders and service members.
"The department's desired end state for RAI is trust," the document states. "Without trust, warfighters and leaders will not employ AI effectively and the American people will not support the continued use and adoption of such technology."