VA should establish an AI disclosure process, Republican lawmaker says
Though the Department of Veterans Affairs has already made “admirable” use of artificial intelligence, transparency “has got to be elevated to a top priority,” according to Rep. Matt Rosendale, R-Mont.
The Department of Veterans Affairs should disclose instances where artificial intelligence technologies engage with or make decisions that affect veterans, according to a top House Republican.
During a House Veterans’ Affairs Subcommittee on Technology Modernization hearing on Monday examining the department’s data privacy and AI efforts, Rep. Matt Rosendale, R-Mont. — the panel's chairman — noted that VA is working to use AI “for some admirable purposes” across the department. But he also warned that “using AI to predict clinical outcomes or mental health problems may be powerful, but it presents a host of ethical problems.”
Rosendale noted, for instance, that the department is “using natural language processing to extract signals of suicide risk from clinical progress notes and other medical records.”
While he said “we need to do whatever we can to prevent veteran suicide,” he expressed concern that “this could lead to a violation of veterans’ rights” depending on the accuracy of the underlying software.
Gil Alterovitz — director of the VA’s National Artificial Intelligence Institute — said “there's always a human in the loop that then looks at the results” and that this process “is a way to help them sift through a large amount of text.”
But when asked by Rosendale whether VA has “a good, consistent disclosure process that is being utilized and being signed off by our veterans” for the use of AI tools, Alterovitz confirmed that the department does not.
“That has got to be elevated to a top priority,” Rosendale said.
Stephania Griffin, director of VA’s Information Access and Privacy Office and Veterans Health Administration privacy officer, also told lawmakers “there’s not a specific notice” when veterans’ data is fed into an AI algorithm, noting that the HIPAA Privacy Rule and other applicable laws and regulations are “technology neutral” when it comes to “the use of personally identifiable information and protected health information.”
“We are required to give notice to our veterans and our patients on how we use their data, how we collect their data, how we share their data,” Griffin said, adding “but again, it's not specific to a technology.”
VA officials noted during the hearing that the department is in the process of establishing an AI governance board to comply with the Office of Management and Budget’s implementation guidance for President Joe Biden’s October 2023 executive order on AI. Griffin said discussions around public disclosure of the department’s use of AI are something that the board “needs to look at more closely.”
Rosendale’s comments echoed concerns from some of his colleagues in Congress, who have also pushed for agencies to be more transparent about their public-facing uses of AI.
A bipartisan group of lawmakers in the Senate and House — led by Sen. Gary Peters, D-Mich., and Rep. Clay Higgins, R-La. — introduced legislation last year that would require agencies to disclose when they are using “automated and augmented systems” to interact with the public or to “make critical decisions” that affect individuals’ education, employment, transportation, healthcare or asylum status.