NASA Still Lacks a Unified Definition of AI, Watchdog Finds
The agency has made progress on artificial intelligence management, but still has work to do to meet governmentwide requirements.
NASA’s Office of Inspector General (OIG) found that while the agency has made progress in managing artificial intelligence, more work remains, according to a report released on Wednesday.
NASA has used AI across a wide variety of agency programs, including storm prediction tools, the Mars Perseverance rover and elements of the Artemis missions. Given the breadth of these use cases, the OIG noted that managing AI and regulating its cybersecurity risks is critical. The watchdog examined NASA’s progress in developing AI governance and standards to help assess the cybersecurity controls protecting its AI data and technology. The OIG found that, despite some effort to manage the agency’s AI, NASA fell short by not having a single definition of AI. The report also noted that the agency’s AI classification and tracking are insufficient to fully address current and future federal requirements and AI cybersecurity concerns.
Specifically, these shortcomings could hamper NASA’s ability to manage its AI and comply with several executive orders. Moreover, the lack of a central, standardized process could put the agency at increased risk of cybersecurity threats.
The watchdog noted that NASA has made efforts to improve its AI management. For example, the agency established the NASA Framework for the Ethical Use of Artificial Intelligence in April 2021, drawing on principles from leading AI organizations to provide initial agency guidance on ethical AI decisions, along with advice and questions for AI practitioners to consider.
In September 2022, NASA also developed the NASA Responsible AI Plan, which identified the agency’s responsible AI officials and detailed how NASA would implement the requirements of a 2020 executive order on trustworthy AI. Those requirements include capturing and reporting AI use case inventories, overseeing AI projects to ensure continuous monitoring and engaging the AI community on NASA’s ethical AI standards and their implementation.
However, the OIG found that, despite these planning documents, NASA has not adopted a standard definition of AI. The agency uses three definitions across different overarching documents: the NASA Framework for the Ethical Use of Artificial Intelligence, NASA’s Responsible AI Plan (which uses the executive order’s definition) and NASA’s internal AI machine learning SharePoint collaboration website.
“While all three definitions are similar, subtleties and nuances in each can alter whether a particular technology is properly considered AI,” the report stated.
The OIG added that NASA personnel reported AI based on their own understanding of the term rather than relying on these definitions.
This lack of a standard definition means NASA does not have a way to “accurately classify and track AI or to identify AI expenditures within the agency’s financial system, making it difficult for the agency to meet federal requirements to monitor its use of AI,” according to the report.
The OIG also found that NASA’s AI is often managed as part of a larger project rather than on its own, meaning it is not separately tracked. This has affected the agency’s response to the 2020 trustworthy AI executive order, which directed agencies to create an AI inventory, as well as its response to a 2019 executive order on maintaining U.S. AI leadership, which called for estimating annual AI expenditures. To build the inventory and budget estimate, NASA uses a multi-faceted data call to collect individual responses from AI users, a process the OIG noted “takes significant time to compile, validate and vet, and runs the risk of clerical errors that could be significantly lessened using an automated process.”
Furthermore, while NASA officials believe the agency’s existing processes, such as monitoring requirements and safeguarding AI systems from cyber threats, should be sufficient to address AI security concerns, previous OIG audits have found that NASA’s fragmented IT management puts it at increased risk from cyber threats. According to the OIG, NASA will also face greater challenges implementing potential future federal AI cybersecurity controls because it lacks an AI-specific mechanism to appropriately categorize and classify AI within its record systems.
The OIG recommended that NASA establish a standard definition of AI that “harmonizes” the three existing definitions; ensure the standard definition is used to identify, update and maintain NASA’s AI use case inventory; identify a classification system to help quickly apply federal cybersecurity control and monitoring requirements; and create a way to track budgets and expenditures for the AI use case inventory.
NASA agreed or partially agreed with the watchdog’s recommendations and outlined how it would address them.