Lawmaker set to introduce bill to standardize AI system testing

Sen. John Hickenlooper, D-Colo., shown here at a 2023 committee hearing, is backing legislation to put NIST in a key AI safety role. Anna Moneymaker/Getty Images

The proposed legislation would focus on standardizing how AI systems are tested for fairness and safety, with the National Institute of Standards and Technology at the forefront.

Sen. John Hickenlooper, D-Colo., will sponsor legislation that would ask federal agencies — spearheaded by the National Institute of Standards and Technology — to draft new voluntary guidance for AI evaluations, ensuring automated systems are tested accurately and deployed safely.

The “Validation and Evaluation for Trustworthy Artificial Intelligence Act,” or VET AI Act, is designed to help ensure AI systems can be validated for efficacy and trustworthiness. Hickenlooper will introduce the bill when the Senate returns from recess.

“AI is moving faster than any of us thought it would two years ago,” Hickenlooper said in a press release. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”

The first provision would have NIST create consolidated approaches for those developing and deploying AI to monitor risk levels and evaluate a given system throughout its lifecycle. Part of this process would identify the standards that internal auditors — those who work within the organization leveraging the AI system — and external auditors — third parties that evaluate AI systems independently — must adhere to in order to ensure an AI system is working safely and as intended.

To develop testing guidance, NIST would have to collaborate with the National Science Foundation and the Department of Energy, with a focus on safeguarding consumer privacy; mitigating system harms; ensuring dataset quality; disclosing system updates to external parties; and maintaining system governance controls. 

The second key provision in the bill would require the Department of Commerce to create an “Artificial Intelligence Assurance Qualifications Advisory Committee” that would help execute the guidance. Setting the standards for how to properly accredit independent AI auditing organizations would fall within that group’s purview, with a mandated report to Congress summarizing the qualifications an AI auditing entity should possess.

Another task for the group would be deciding how AI auditing entities can attain accreditation, such as through licensing and certification regimes.

Under the bill’s third provision, the Commerce Secretary and NIST would have to launch a study, within 90 days of the guidance’s debut, examining the current market landscape and the methodologies employed by AI system evaluators. The finalized report would have to be submitted to Congress within a year and focus on the best practices employed by these organizations.

Hickenlooper’s proposed bill builds on existing federal efforts to help cultivate trust in automated and machine learning technologies during their deployment and to bring varying levels of regulatory oversight to the industry.

According to industry experts, the VET AI Act’s strength lies in specifying AI system assessment and audit requirements that complement the existing self-certification approaches seen in the 2023 AI Research Innovation and Accountability Act, which Hickenlooper co-sponsored.

“The fact that these are voluntary guidelines is crucial; we typically evolve from establishing best practices in companies to developing standards, and then from there to creating regulation,” Divyansh Kaushik, a senior fellow at American Policy Ventures, told Nextgov/FCW. “This bill aims to be a catalyst in that process, enhancing the development of best practices and standards.”

NIST has been increasingly tasked with developing and disseminating best practices to ensure AI and machine learning systems are deployed responsibly. Testing AI systems is a component of the AI Risk Management Framework NIST unveiled in January 2023, and in May 2024 Commerce released a strategic vision for NIST’s AI Safety Institute that also focused on A/B testing and red teaming AI software.

Kaushik added that the VET AI Act would likely expand NIST’s role in AI assurance procedures.

“The industry should view this as a proactive step towards building trust and accountability in AI systems,” he said.

As one of the collaborating agencies, the NSF focuses on both technical innovation and responsible use within AI systems. Enhanced cooperation in these arenas is key to developing AI-driven solutions that benefit all of society, NSF Director Sethuraman Panchanathan said.

“This is a whole-of-nation moment,” Panchanathan said in a statement to Nextgov/FCW. “We need participation, collaboration, and support from all sectors.”

Other legislation, introduced in November 2023, would direct NIST and the Office of Management and Budget to develop more testing and evaluation capabilities for AI technology acquisitions within the federal government. That bill has yet to advance through congressional committees.