NIST plans new risk evaluation methods and standards tracking for AI
The associate director for emerging technology at the National Institute of Standards and Technology said the evaluations are designed to identify potential harms from AI systems before they are deployed.
The National Institute of Standards and Technology is adding new features to its roster of tools for standardizing artificial intelligence technologies, part of the agency’s ongoing effort to promote the development of responsible AI systems.
Elham Tabassi, the associate director for emerging technology at NIST, testified before the House Science, Space and Technology Committee on Wednesday that her office is working to build on its existing AI guidance with new evaluation methods and a standards tracking system.
These new evaluation efforts will primarily look at the socio-technical aspects of AI systems to determine whether they are safe enough for deployment.
“In particular, the evaluations have the goal of identifying risks and harms of AI systems before they are deployed, and to establish metrics and evaluation infrastructure that will allow AI developers and deployers to detect the extent to which AI systems exhibit negative impacts or harms,” Tabassi said.
The forthcoming evaluations support NIST’s goal of measuring the societal robustness of AI systems, not just their technical performance.
Other planned updates include adding a standards tracker to the existing AI Resource Center. While Tabassi did not detail what data the tracker would collect to monitor standards developments, she said it will feed into a matrix the agency is crafting to provide a shared, common foundation for AI systems in the absence of broader regulation.
“All of these things are trying to help the community with operationalization of the [AI Risk Management Framework], particularly small, medium sized businesses that need more help and support in operationalization of the AI RMF,” she said.