NIST’s new AI safety institute to focus on synthetic content, international outreach
Inaugural U.S. AI Safety Institute Director Elizabeth Kelly said she aims for the new NIST initiative to become the “leading safety institute in the world.”
The National Institute of Standards and Technology’s new U.S. AI Safety Institute aims to lead both domestic and international conversations about standardization for artificial intelligence tools and applications, with a focus on monitoring synthetically generated content, according to the institute’s director, Elizabeth Kelly.
Speaking during a Monday panel discussion at the unveiling of MITRE's new AI Assurance & Discovery Lab, Kelly, who was appointed head of the USAISI on Feb. 7, discussed the “broader vision” for the institute.
A product of President Joe Biden’s October executive order on AI, the institute was established in February 2024 and is tasked with a slew of deliverables, including evaluating large language models and conducting advanced research into machine learning and AI. Kelly said one of those tasks is developing international engagement strategies with like-minded nations.
“I think we're very excited about positioning it as the leading safety institute in the world,” she said.
Kelly elaborated on the three pillars that guide the USAISI’s work: testing and evaluation; safety and security protocols; and guidance on AI-generated content.
The first pillar will center primarily on creating test beds, while the second will focus on identifying problems in how models function and offering solutions to them.
“It's not enough to just identify problems with models,” she said. “We have to say, ‘okay, if this is the problem, what is the risk mitigation that actually works?’”
Areas of focus for institute researchers include exploring effective watermarking for artificially generated content, establishing content provenance, detecting inauthentic content, and operationalizing these tactics across the federal government with the help of the Office of Management and Budget.
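Watermarking in this context refers to statistical signals embedded in text as a model generates it, which a detector can later test for. Purely as an illustration — this is not the institute’s method, and every name and parameter below is an assumption — the following sketch shows how a detector for a hypothetical “green-list” watermark (in the style of Kirchenbauer et al., 2023) might check whether favored tokens appear more often than chance would predict:

```python
# Toy sketch of statistical watermark detection. Illustrative only; not any
# agency's or vendor's actual scheme. Assumes a "green-list" watermark: during
# generation, the previous token seeds a hash that marks roughly half the
# vocabulary "green," and the sampler nudges output toward green tokens.
# A detector then tests whether green tokens are over-represented.

import hashlib
import math

VOCAB_SIZE = 50_000
GREEN_FRACTION = 0.5  # expected share of green tokens in unwatermarked text

def is_green(prev_token: int, token: int) -> bool:
    """Deterministically assign `token` to the green list seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def detect(tokens: list[int]) -> float:
    """Return a z-score; large positive values suggest a watermark is present."""
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

if __name__ == "__main__":
    import random
    random.seed(0)
    # Random token IDs stand in for unwatermarked text; the z-score should be near 0.
    text = [random.randrange(VOCAB_SIZE) for _ in range(200)]
    print(f"z-score on unwatermarked tokens: {detect(text):.2f}")
```

The open questions Kelly describes start exactly here: how robust such signals are to paraphrasing, how to standardize detection thresholds across vendors, and how provenance metadata should complement statistical tests.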
Kelly noted that international collaborations are a major objective alongside the research efforts.
“We at [the] Safety Institute are working very closely with all of our allies and partners — both those countries that have already set up Safety Institutes like Japan and the U.K., as well as those that are thinking about it or are in earlier stages — and I think that collaboration needs to really take two different forms,” she said.
The first form of collaboration would produce aligned, interoperable guidance that gives private sector companies a level playing field, while the second would involve working together to advance the science underpinning AI technologies.
“A lot of the questions that we are together tackling — for example, ‘What should watermarking look like?’ ‘What risk mitigations actually work? Which tests work best?’ — are all areas where there are open questions. And by pooling our resources and working together closely, I think we can make a lot more progress,” she said. “And we're excited for that collaboration.”