VA accounts for majority of all agencies’ safety- and rights-impacting AI
Of the 1,757 AI use cases federal agencies reported last year, 227 were listed as safety- or rights-impacting. VA accounted for 145 of those identified use cases.
The Department of Veterans Affairs has drastically expanded its adoption of artificial intelligence capabilities over the past year, and it is exploring the use of more safety- and rights-impacting capabilities than any other federal agency.
Since 2020, federal mandates have increasingly required most agencies — with limited exceptions for national security — to compile and disclose their AI use cases, which has shed new light on how VA and other agencies are approaching the use of emerging technologies.
A December 2020 executive order from then-President Donald Trump first called for agencies to publicly release their AI inventories, with the reporting guidelines fleshed out by an AI-focused executive order from President Joe Biden in October 2023. Biden’s mandate was further bolstered by a March 2024 memo released by the Office of Management and Budget, which also called for agencies to report use cases identified as safety- and rights-impacting.
Under that guidance, covered federal agencies were required to submit their AI use case inventories to OMB by Dec. 16.
In total, 37 agencies reported a combined 1,757 AI use cases in 2024, according to OMB’s consolidated inventory. Of those, 227 were identified as safety- or rights-impacting, with VA accounting for 145 of those identified tools — roughly 64%.
By comparison, the agency with the second-most use cases identified as safety- and rights-impacting in 2024 was the Department of Homeland Security, which reported 34 such instances out of its total of 183 documented AI capabilities.
OMB defined rights-impacting AI as a tool whose output affects a decision or action for a specific individual or entity and has a “legal, material, binding or similarly significant effect” on their civil rights, civil liberties, privacy and equitable access to government services or other areas where civil rights and equal opportunity protections apply.
Safety-impacting AI refers to technologies whose output could affect the safety of a person’s or entity’s wellbeing, environment, assets or critical infrastructure.
VA, for its part, reported 227 total AI use cases in its 2024 inventory, with two additional use cases from the VA Office of the Inspector General added to OMB’s compiled total for the department. Neither of the OIG’s use cases was determined to be safety- or rights-impacting.
VA’s 2024 AI inventory included 100 more total use cases than it reported in 2023, although it was not required at the time to identify whether any of them were safety- or rights-impacting.
In a LinkedIn post last month announcing the AI inventory’s release, VA Chief Artificial Intelligence Officer and Chief Technology Officer Charles Worthington said the department’s newly reported use cases represent “some of the most exciting innovation happening in government today.”
Worthington noted, in part, that the department is already piloting a GenAI chatbot that is approved to use VA data to help “assist with basic administrative tasks.” He said that the response has been overwhelmingly positive, with over 72% of users reporting that they agree or strongly agree “that the tool has made them more efficient.”
The department has also accelerated its onboarding of AI capabilities over the past year, with 130 use cases listed as being in the “operation and maintenance” stage. During a House Veterans’ Affairs Subcommittee on Health hearing last January, Worthington said that just 40 of the department’s use cases were “in an operational phase.”
A VA spokesperson told Nextgov/FCW that the department “actively monitors and manages the safety and rights impacts of VA AI use cases” by adhering to Biden’s executive order and OMB’s guidance, and that “all of VA’s AI systems must complete privacy and security reviews, just like other non-AI VA technology.”
VA’s mission to provide healthcare and benefits to millions of veterans, coupled with its focus on using AI tools to enhance medical care, also has a large impact on the types of advanced capabilities that it is working to study and deploy.
The vast majority of VA’s reported safety- and rights-impacting use cases were located within the Veterans Health Administration and were identified as serving health and medical purposes. VHA is the nation’s largest integrated healthcare network. VA has also launched a National AI Institute — housed within VHA — that is focused on researching and implementing emerging technologies to improve veteran care.
“A large proportion of VA’s AI use cases are FDA-cleared AI-enabled devices, which have undergone the same rigorous FDA testing as devices used in non-VA health care settings,” the spokesperson further said.
By comparison, the Department of Health and Human Services reported 271 AI use cases in 2024 — the most of any federal agency — but only four of those were identified as safety- or rights-impacting.
The VA spokesperson added that the department is “employing inventorying and risk management practices for AI” to limit the impact these tools could potentially have on veterans and personnel, including through “independent reviews, human oversight and use case monitoring.”
VA’s compliance plan for OMB’s guidance, issued in September 2024, also outlined the department’s expanded approach to ensuring its responsible use of AI.
Beyond adopting OMB’s definitions for safety- and rights-impacting use cases, the plan said VA “elaborated upon these definitions” in an internal document that “identified a representative set of potential AI use cases across VA and jointly determined whether VA identifies them as safety-impacting and rights-impacting, providing a written rationale for each use case.”
“As new regulations emerge and additional use cases are identified, VA will iterate and refine the document accordingly,” the department’s plan further noted.