NASA is working to onboard AI tools
The space agency is testing a generative AI capability that officials expect to go live this summer with approvals to use sensitive internal data.
NASA is working with private sector firms to acquire new AI capabilities but remains concerned about biased results from the emerging technologies, agency officials said on Wednesday.
During an employee town hall focused on the space agency’s use of AI technologies, officials stressed their continued focus on the responsible use of AI tools — tenets, they noted, that align with President Joe Biden’s October 2023 executive order outlining how the federal government should safely and securely use the tools.
NASA Administrator Bill Nelson said the agency will not “suddenly go into a whole new bunch of things” because of the development of more advanced AI tools, noting that NASA has long used AI to support its missions.
“We can make our work more efficient, but that's only if we approach these new tools in the right way, with the same pillars that have defined us since the beginning: safety, transparency and reliability,” Nelson said, adding that “we work with partners across industry, across universities, across our government [and] across the world to find ways to improve them.”
AI tools have helped NASA track wildfire smoke, count trees and identify the location of exoplanets. Ongoing engagement with private sector partners, however, has largely focused on the adoption of more powerful tools.
NASA CIO Jeff Seaton said the agency is working with vendors to bring in “publicly available, generally known tools” as well as other capabilities “more towards the mission technical side of things.” This includes ongoing work on technology that is compliant with the Federal Information Security Modernization Act, or FISMA.
“We have a generative AI large language model capability that's being tested out by some of our folks within the agency,” Seaton said. “And we anticipate that by mid to late summer, we'll have that environment rated at a FISMA moderate level so we can start leveraging some sensitive internal data and experimenting.”
Earlier this month, NASA announced that David Salvagnini, the agency’s chief data officer, would also serve as its chief AI officer. Biden’s order required agencies to designate a chief AI officer to oversee and manage risks associated with the emerging technologies.
When it comes to AI bias, the potential for advanced technologies to produce misleading or incomplete results based on the data underlying their algorithms, Salvagnini said the agency remains focused on understanding these shortcomings as it works to bring in new tools.
“One of the things we have to be careful about as we onboard various different AI technologies that are coming from our vendor partners is: do we really understand how they work and have we thoroughly tested them for bias?” he said.
As NASA and other agencies work to leverage AI, Salvagnini said officials have to move from wondering if the new technologies are “safe from a cyber perspective” to also considering if “the AI is protected against bias.” He added that federal employees should also become more aware of AI’s potential and downsides as these tools are rolled out.
During Wednesday’s event, Salvagnini said the agency will soon announce a “summer of AI” training initiative “where everyone in NASA is going to have an opportunity to learn more about AI.”
“I would encourage you to participate in those courses and learn about bias and learn about how you can prevent some of the bias that would be unforeseen,” he told the agency’s employees. “Learn about the data that enables an AI to do what it does.”