NIST adds 5 new members to its AI Safety Institute
The new members will focus on AI objectives related to national security, standards development and more.
The U.S. AI Safety Institute’s leadership team got an infusion of expertise on Tuesday, with five new members joining the leadership of the AI safety-focused effort housed in the National Institute of Standards and Technology.
Announced by Commerce Secretary Gina Raimondo, leaders joining the AISI include Paul Christiano, a former OpenAI leader and founder of the nonprofit Alignment Research Center, as head of AI safety; Adam Russell, director of the Information Sciences Institute’s AI Division at the University of Southern California, as chief vision officer; Mara Campbell, former deputy chief operating officer at Commerce’s Economic Development Administration, as acting chief operating officer and chief of staff; Rob Reich, professor of political science at Stanford University and associate director of the Institute for Human-Centered AI, as senior advisor; and Mark Latonero, former deputy director of the National AI Initiative Office at the White House Office of Science and Technology Policy, as head of international engagement.
The five new members will continue existing work to help execute tasks stipulated in President Joe Biden’s 2023 executive order on AI.
“To safeguard our global leadership on responsible AI and ensure we’re equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer,” said Raimondo in the press release. “Developing guidelines that will strengthen our safety and security, engaging with civil society and business, and working in lockstep with our allies are fundamental to addressing this generation-defining technology.”
Some of the tasks the new members and AISI will focus on include designing and testing frontier AI models, overseeing agency operations, implementing broader agency strategy and cultivating more international cooperation, with a priority on national security objectives and clear AI standards.
“I am very pleased to welcome these talented experts to the U.S. AI Safety Institute leadership team to help establish the measurement science that will support the development of AI that is safe and trustworthy,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “They each bring unique experiences that will help the institute build a solid foundation for AI safety going into the future.”