OpenAI, Anthropic to collaborate with NIST on AI safety testing
The standards agency's AI Safety Institute obtained access to "major new models" from the two AI leaders ahead of their public release for testing purposes.
The U.S. government will get an advance look at new artificial intelligence models from industry leaders OpenAI and Anthropic as part of a new safety testing collaboration announced on Thursday by the National Institute of Standards and Technology.
NIST's AI Safety Institute, established under the Biden administration's AI executive order, will get to preview "major new models" from the two companies in advance of their public release and will have ongoing access to the models. The institute intends to share feedback with the two companies on safety improvements and plans to work with counterparts at the United Kingdom's AI Safety Institute on recommendations, according to a news release.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” institute director Elizabeth Kelly said in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
The AI Safety Institute launched in February 2024 and is charged with developing testing methodologies and testbeds for research on large language models. In addition, the institute is intended to explore options for detecting and identifying AI-generated content and come up with ways to operationalize those activities for federal government use.
The agency isn't sharing the memorandums of understanding that lay out the arrangements for sharing the models for testing purposes because of "commercial sensitivities," a NIST spokesperson told Nextgov/FCW.
"We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models," OpenAI CEO Sam Altman said on social network X. "For many reasons, we think it's important that this happens at a national level. US needs to continue to lead!"
The arrangement with OpenAI and Anthropic constitutes just one safety testing front for NIST. The agency is also supporting an AI red-teaming exercise being held by Humane Intelligence in October. The company is seeking AI researchers, cybersecurity experts, data scientists and others to help hack and stress-test generative AI models submitted by participating companies.