Creating responsible AI presents 'major technical challenges,' NIST official says
Experts discussed the regulations needed for fast-developing AI software, with NIST’s Elham Tabassi emphasizing the need for proper data and measurement strategies for different systems.
Advanced artificial intelligence software capable of performing complex tasks and engaging in intelligent dialogue with human users has dominated recent headlines and exposed the lack of federal guardrails around the advancing technology.
In the absence of comprehensive federal legislation, public and private entities have looked to agencies like the National Institute of Standards and Technology, which recently released a new AI Risk Management Framework, for guidance on how to safely and effectively design and operate artificial intelligence systems.
Speaking on a panel of experts discussing the future of governance in the AI sector, Elham Tabassi, chief of staff at NIST’s Information Technology Laboratory, said that determining a system’s trustworthiness hinges on using the right metrics.
“If you really want to improve the trustworthiness of AI systems, any approach for risk management, any approach for understanding the trustworthiness, should also provide metrology for how to measure trustworthiness,” she said on Monday.
Tabassi explained that AI systems are all about context, and that how they work changes with the data they analyze. Risk assessments of AI software should therefore be tailored to specific use cases and employ the proper metrics and test data to gauge functionality.
“When it comes [to] measuring technology, from the viewpoint of ‘Is it working for everybody?’ ‘Is AI systems benefiting all people in [an] equitable, responsible, fair way?’, there [are] major technical challenges,” she said.
In both the AI RMF and the separate Playbook released by NIST, these questions constitute the recommended socio-technical approach to building responsible AI systems. Developers and software designers involved in an AI system’s creation should keep this approach in mind to prevent AI from being used in ways other than intended, according to the framework.
Tabassi added that marrying this approach with appropriate evaluation and measurement methods is “extremely important” to risk management when beginning to deploy a new system.
“It's important when this type of testing is being done, that the impact[ed] communit[ies] are identified so that the magnitude of the impact can also be measured,” she said. “It cannot be overemphasized, the importance of doing the right verification and validation before putting these types of products out. When they are out, they are out with all of their risks there.”
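To make the measurement point concrete, here is a minimal sketch (not taken from the NIST framework or Playbook, and using entirely hypothetical data, group labels and function names) of the kind of disaggregated evaluation Tabassi describes: scoring a classifier on use-case test data with an overall metric plus a per-group breakdown, so that gaps between impacted groups become visible before deployment.

```python
# Illustrative sketch only: this is not from the NIST AI RMF or Playbook.
# It shows one way a team might evaluate a hypothetical binary classifier
# on use-case-specific test data, reporting an overall metric alongside a
# per-group breakdown to check whether the system "works for everybody."

from collections import defaultdict

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def accuracy_by_group(preds, labels, groups):
    """Accuracy computed separately for each impacted group."""
    buckets = defaultdict(lambda: ([], []))
    for p, y, g in zip(preds, labels, groups):
        buckets[g][0].append(p)
        buckets[g][1].append(y)
    return {g: accuracy(ps, ys) for g, (ps, ys) in buckets.items()}

# Hypothetical test set for a single use case; real evaluations would use
# representative data chosen for that deployment context.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
labels      = [1, 0, 0, 1, 1, 1, 1, 0]
groups      = ["A", "A", "A", "B", "B", "B", "A", "B"]

overall = accuracy(predictions, labels)
per_group = accuracy_by_group(predictions, labels, groups)
gap = max(per_group.values()) - min(per_group.values())

print(f"Overall accuracy: {overall:.2f}")        # 0.62
print(f"Per-group accuracy: {per_group}")        # {'A': 0.75, 'B': 0.5}
print(f"Accuracy gap across groups: {gap:.2f}")  # 0.25
```

In practice, the choice of metrics, test data and group definitions would be made for the specific deployment context, which is the tailoring the framework calls for.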
This advice comes as many private sector entities have rolled out AI-enabled capabilities for public experimentation, such as the recent deployment of Microsoft’s AI chat service combined with the Bing search engine. The software is only available to a limited number of users as software engineers work to scale and improve the current version.
Reports of strange and inappropriate communication from generative AI systems, namely from the Bing service enabled by OpenAI’s ChatGPT, have prompted lawmakers to introduce new regulatory legislation. President Joe Biden responded to growing calls for better oversight of the development and use of AI software with the White House’s Blueprint for an AI Bill of Rights in October 2022, but, like the NIST frameworks, the document is not a legal mandate.
Tabassi said she doesn’t think the lack of formal regulation in the U.S. will prevent improvements in AI use and development. Rather, she said the government should prioritize spearheading and collaborating on international standards.
“My personal belief is that, regardless of the regulatory and policy landscape, having good, solid, scientifically-valid, technically-solid, international standards that talk about risks, that talk about risk management, that talk about trustworthiness, can be a good backbone and a common ground for all of these regulations and policy discussions,” she said.