NIST delivers draft AI guidance, generative AI pilot program
Pursuant to the White House's sweeping October AI executive order, the National Institute of Standards and Technology unveiled four draft documents and a pilot program to guide AI innovation and testing.
The National Institute of Standards and Technology released a slew of new draft documents on artificial intelligence guidance and deployment, spanning topics from synthetic content risks to international standards development.
The four new documents, released Tuesday, add to NIST’s growing portfolio of AI-centric guidance. They come as multiple agencies, including NIST itself, finalize the mandates stipulated in President Joe Biden’s 2023 executive order on AI, which has now passed its 180-day mark.
“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio in a press release. “These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation.”
NIST’s four new documents are: the AI RMF Generative AI Profile, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, Reducing Risks Posed by Synthetic Content, and A Plan for Global Engagement on AI Standards.
The first, the AI RMF Generative AI Profile, guides organizations in identifying the risks that generative AI software can pose in their digital networks and helps them create a set of actions tailored to their individual needs. It is intended as a companion to the preexisting AI RMF released in early 2023. The profile covers a diverse group of risks, including chemical, biological and nuclear weapons, as well as hacking, malware and phishing threats.
The Secure Software Development Practices for Generative AI and Dual-Use Foundation Models publication focuses on secure AI and automated software development. The document also addresses concerns around malicious training data that can negatively affect the outputs of AI systems, offering guidance on collecting and handling training data securely so that those systems work safely and without bias.
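To make the data-integrity concern concrete, here is a minimal sketch, under our own assumptions rather than anything prescribed in the NIST publication, of one basic control against tampered training data: checking dataset files against a trusted manifest of SHA-256 hashes before a training run. The `verify_dataset` helper and the `training_data` directory are hypothetical.

```python
# Illustrative sketch of a basic data-integrity check: verifying training
# files against a trusted manifest of SHA-256 hashes before use. This is
# an assumed example, not a practice taken verbatim from the NIST document.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets never load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(data_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return the names of files whose hashes differ from the trusted manifest."""
    return [
        name
        for name, expected in manifest.items()
        if sha256_of(data_dir / name) != expected
    ]


# Hypothetical usage: the manifest would be produced when the dataset was
# first vetted, then re-checked before every training run.
# tampered = verify_dataset(Path("training_data"), manifest)
# if tampered:
#     raise RuntimeError(f"possible data poisoning: {tampered}")
```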
NIST’s draft plan on Reducing Risks Posed by Synthetic Content comes ahead of the November 2024 elections, when officials say AI-generated synthetic content stands to be a notable threat. The draft document covers methods, such as digital watermarking and metadata recording, for authenticating, detecting and labeling synthetic content so that fake media can be distinguished from genuine media.
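As an illustration of what digital watermarking can look like in its simplest form (our example, not a method the NIST plan specifies), the sketch below hides a short provenance tag in the least significant bits of an image’s pixel values, where it is invisible to a viewer but recoverable by a detector. The `embed_watermark` and `extract_watermark` helpers and the "ai-generated" tag are hypothetical.

```python
# Minimal sketch of least-significant-bit (LSB) watermarking. Illustrative
# only; production watermarks use far more robust schemes.
import numpy as np


def embed_watermark(pixels: np.ndarray, tag: str) -> np.ndarray:
    """Hide a UTF-8 tag in the least significant bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for this tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)


def extract_watermark(pixels: np.ndarray, length: int) -> str:
    """Recover a tag of `length` bytes from the pixels' LSBs."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")


# Toy run on a stand-in grayscale image: the tag survives round-trip.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
tagged = embed_watermark(image, "ai-generated")
assert extract_watermark(tagged, len("ai-generated")) == "ai-generated"
```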
The Plan for Global Engagement on AI Standards outlines objectives for the U.S. to work with other countries on developing a shared set of standards to guide the development and implementation of AI systems. Crafting a global consensus around these standards will help ensure that AI systems are developed and used transparently and responsibly.
NIST’s draft standards document invites feedback on what should be included in final standardization efforts; candidate topics include watermarking, scientific research and development, and evaluation metrics.
Beyond the four draft documents, NIST also announced the launch of its pilot NIST GenAI evaluation program, which will bring new research into the generative AI space, focused mainly on benchmark metrics and on techniques for distinguishing fake content from genuine content.
NIST GenAI’s evaluations will help inform the work of the agency’s U.S. AI Safety Institute.
“This pilot addresses the research question of how human content differs from synthetic content, and how the evaluation findings can guide users in differentiating between the two,” the program’s homepage reads.
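For a concrete sense of what such an evaluation could measure, here is a minimal sketch, our illustration rather than NIST GenAI’s actual protocol, of scoring a synthetic-content detector’s detection rate and false-alarm rate against labeled samples. The `score_detector` function is hypothetical.

```python
# Minimal sketch of scoring a synthetic-content detector. Illustrative of
# the kind of benchmark metric a pilot evaluation might report, not the
# program's actual protocol.

def score_detector(predictions: list[bool], labels: list[bool]) -> dict[str, float]:
    """Compute detection and false-alarm rates.

    labels[i] is True when sample i is synthetic; predictions[i] is True
    when the detector flagged it as synthetic.
    """
    tp = sum(p and y for p, y in zip(predictions, labels))        # correctly flagged
    fp = sum(p and not y for p, y in zip(predictions, labels))    # genuine, but flagged
    synthetic = sum(labels)
    genuine = len(labels) - synthetic
    return {
        "detection_rate": tp / synthetic if synthetic else 0.0,
        "false_alarm_rate": fp / genuine if genuine else 0.0,
    }


# Toy run: three synthetic samples, two genuine.
print(score_detector(
    predictions=[True, True, False, False, True],
    labels=[True, True, True, False, False],
))
# {'detection_rate': 0.666..., 'false_alarm_rate': 0.5}
```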
Comment periods on the four draft documents are open, and the final versions are expected to be released later this year.