House lawmakers press for transparency in NIST’s new AI grant funding
A bipartisan cadre of lawmakers is asking for details on the funding methods and oversight of NIST’s new Artificial Intelligence Safety Institute.
House lawmakers on the Science, Space and Technology Committee want to learn more about how the new Artificial Intelligence Safety Institute, or AISI, within the National Institute of Standards and Technology will function, particularly how it will address the risks associated with fledgling AI systems.
A Dec. 14 letter to NIST Director Laurie Locascio from a bipartisan group spearheaded by Ranking Member Zoe Lofgren, D-Calif., and Chairman Frank Lucas, R-Okla., expresses concern that AISI could fund organizations outside the government whose research transparency is not guaranteed.
“We applaud NIST for its important efforts to guide the responsible development and deployment of trustworthy artificial intelligence,” the letter reads. “Unfortunately, the current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue.”
NIST’s role as the federal government’s AI research and standards quarterback was cemented in President Joe Biden’s October AI executive order, which placed multiple new tasks and initiatives under the agency’s purview. AISI is one of the new initiatives within NIST and aims to help drive both AI innovation and policy at the federal level.
Among AISI’s objectives is authorizing funding, in the form of grants or awards, for external, nongovernmental entities. Lofgren and Lucas, along with Reps. Haley Stevens, D-Mich., Mike Collins, R-Ga., Valerie Foushee, D-N.C., and Jay Obernolte, R-Calif., highlight the need for oversight of the final research products that result from these grants.
The lawmakers’ primary concern is that a potential AISI funding recipient may not adhere to the best practices of the scientific discovery process.
“Findings within the [AI] community are often self-referential and lack the quality that comes from revision in response to critiques by subject matter experts,” the letter continues. “There is also significant disagreement within the AI safety field on scope, taxonomies, and definitions.”
Funding pitfalls the lawmakers want to avoid include grants for research that does not effectively mitigate risks, relies on evaluation methods that lack validity or is conducted behind a veil of corporate secrecy.
As NIST stands up AISI, including its funding efforts, the lawmakers are asking for a staff briefing on the institute’s grant process and how its funds are likely to be leveraged.
“As NIST prepares to fund extramural research on AI safety, scientific merit and transparency must remain a paramount consideration,” the letter states.