How NIST is helping to guide the government conversation on AI
Industry, technologists, policymakers and members of Congress are watching what the U.S. standards-setting agency is doing to understand the risks of AI.
The National Institute of Standards and Technology may be in the perfect position to offer a blueprint for the uncharted world of artificial intelligence.
The agency, which focuses on voluntary, stakeholder-driven measurement standards, is drilling down on the risks posed by the emerging technology in a series of guidance documents, including the AI Risk Management Framework published earlier this year.
The guidance comes as generative AI apps like ChatGPT are having a moment. NIST's experts are looking at risks like algorithmic biases, autonomous capabilities, job displacement and other uncertainties posed by the advent of AI technology.
“We're not a regulatory agency, but our work is being used by a very broad…community, from industry to academia, but also policymaking,” Elham Tabassi, the chief of staff of the Information Technology Laboratory at NIST, told Nextgov/FCW. “What we do is work with the community to develop guidance that are clear, that are implementable, that are actionable and they become the basis for good policymaking.”
In May, NIST revised its own definition of AI to bring generative systems within the scope of its guidance.
“After the ChatGPT and…other companies put similar products out, we started conversations with experts in the community again,” Tabassi said, but noted that the scope of existing guidance already applies to the new technology.
U.S. leadership in international standards processes is a priority of the Biden administration, and leading entrants in the generative AI space are looking to agencies like NIST to offer developers basic rules of the road when it comes to AI safety, accuracy, trustworthiness and privacy.
"The United States can make safety a differentiator for our AI industry just as it was a differentiator for our early aviation and pharmaceutical industries," RAND Corporation CEO Jason Metheny told a congressional panel in June. "Government involvement in safety standards and testing led to safer products, which in turn led to consumer trust and market leadership. Today, government involvement can build consumer trust in AI that strengthens the U.S. position as a market leader."
Alyssa Lefaivre Škopac, acting executive director of the Responsible AI Institute, agreed that standards for emerging AI technologies can outpace slower regulatory efforts.
“Guardrails and regulation are going to be incredibly important to the adoption of AI in a very innovative and safe way,” Lefaivre Škopac told Nextgov/FCW. “The reason that regulation and guardrails are so critical is we won't be able to foster the competitive advantage and also the innovation that harnesses the benefits of AI, unless we put some of this structure in place.”
Political bodies like the European Union have taken steps to instill a level of regulation through measures like the EU AI Act. Lefaivre Škopac said more nations will have to engage in similar regulatory efforts to continue economic cooperation.
“There has been a massive push and demand from democracies around the world to actually have this conversation about how do we regulate quickly enough to keep up with emerging technology and how quickly things are moving, especially around generative AI and foundation models,” she said.
Lynne Parker, former deputy director of the Office of Science and Technology Policy, wants Congress to mandate NIST's framework for wider U.S. government use.
"Congress should require federal agencies to use the NIST AI risk management framework during the design, development, procurement, use and management of AI," Parker told Senate lawmakers at a May hearing. "Beginning with the standardized assessment of the risks posed by use cases of AI is a key step that can be taken now by all federal agencies without needing to wait for additional OMB guidance."
While NIST's work can form the basis of congressional and executive action, Tabassi stressed that its efforts are voluntary and stakeholder-based.
"What we do is work with the community to develop guidance that are clear, that are implementable, that are actionable and they become the basis for good policymaking," Tabassi said. "The non-regulatory nature of our character is really important for us because that is what allows us to build trust with the community, with the industry, aligned with the technical excellence and other things that we bring in…to be able to do the work that we do.
“Voluntary approaches give us the flexibility to allow innovation to happen but also come back, revise and retune the guidance based on what's happening,” she added.