Commerce announces AI safety consortium
The new stakeholder group, based at the National Institute of Standards and Technology, will help drive AI safety standards.
The National Institute of Standards and Technology unveiled the first consortium dedicated to promoting the safe design, development and deployment of artificial intelligence systems.
Announced on Thursday, the U.S. AI Safety Institute Consortium will convene leaders working with AI and machine learning technology to help meet the goals established in President Joe Biden’s October 2023 executive order on AI.
Among the consortium’s initial tasks is the development of guidance on topics in AI software and system development, including red-team testing, capability evaluations, risk management and the watermarking of synthetic content.
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Commerce Secretary Gina Raimondo in a press release.
The group includes experts from the public and private sectors, as well as civil society organizations and academia. The consortium will be housed within the U.S. AI Safety Institute and bring together over 200 entities in its inaugural cohort.
Notable members include representatives from Apple, Adobe, Accenture, the U.S. Bank National Association, Wells Fargo, Kaiser Permanente, the Johns Hopkins University, the Cleveland Clinic, Credo AI, CrowdStrike, the Cyber Risk Institute, Google, IBM and more.
Including a large and diverse group of industries in crafting standards to evaluate the risk and safety of AI technologies is part of NIST’s broader push to establish industry-agnostic standards for creating and deploying responsible AI and machine learning technologies.
NIST Director Laurie Locascio underscored the importance of U.S. leadership in discussions on international standards for emerging and critical technologies like AI. Speaking at a Wednesday event hosted by the Information Technology Industry Council, Locascio said that implementing and adopting voluntary industry standards can help inform broader international standards.
Having the U.S. help create international guidance for the design and use of AI systems will keep the country at the forefront of the market and ensure that the rights-based approach to AI guidance favored by the Biden administration is adopted globally, the director said.
“One of my priorities as the NIST director is ensuring stronger U.S. engagement in international standards, and AI is at the top of our list in the areas of focus,” she said.
Locascio added that the new consortium will help cultivate a “lasting approach” among stakeholders in deploying and creating safe AI systems.
“We need to ensure aligned approaches in the development and in the science of safe and trustworthy AI,” she said. “The consortium is a critical pillar of the Institute and it will ensure that the Institute's research…[is] integrated into the broad community.”
Elizabeth Kelly was named as the institute’s inaugural director, and Elham Tabassi will serve as the institute’s chief technology officer.
“The USAISI will advance American leadership globally in responsible AI innovations that will make our lives better,” Tabassi said in a statement. “I am thrilled to be part of this remarkable team, leading the effort to develop science-based, and empirically backed guidelines and standards for AI measurement and policy.”