Mitigating AI risks requires global cooperation, officials say


The U.S. is looking “to try and build international consensus around a set of norms” for countries’ uses of AI, a State Department official said on Thursday.

The United States is working with other nations to build global norms around the risks posed by artificial intelligence technologies, several government officials said during an event hosted by the Center for a New American Security (CNAS) on Thursday. 

Since President Joe Biden issued an executive order last October on the safe, secure and trustworthy use of AI, federal agencies have been working more closely with their international counterparts to support the development of novel capabilities while mitigating their potential dangers. 

Michael Kaiser — associate deputy assistant secretary for policy, strategy and analysis at the Department of Homeland Security’s Countering Weapons of Mass Destruction Office — said AI poses a number of “dual use” challenges, with the emerging capabilities potentially applied to create new biological weapons but also having the ability to drive societal breakthroughs. 

Kaiser noted that his office was designated to produce a report about reducing the risks of AI when it comes to “chemical, biological, radiological and nuclear threats,” which was delivered to Biden in April. 

He said the report’s very first finding was “about building consensus among these different communities — the national security community, public health, scientific, even food and agricultural communities — to understand what is the actual real level of risk based on scientific principles, as well as understanding the capabilities of adversaries to use biological agents, in particular, to conduct attacks against the homeland.”

The State Department has also worked to create international norms around the use of AI tools by issuing a political declaration on “Responsible Military Use of Artificial Intelligence and Autonomy.” The declaration, first launched in February 2023, has been endorsed by 54 nations, including the U.S., as of May 29. 

Wyatt Hoffman, a foreign affairs officer in State’s Office of Emerging Security Challenges, said the department’s Bureau of Arms Control, Deterrence and Stability is working “to try and build international consensus around a set of norms” that will guide countries’ uses of AI, particularly for military purposes. 

“A lot of what we're focused on with the political declaration are defining those practices that would actually have a tangible impact in reducing those risks,” he said, adding that State is particularly interested in preventing “unintentional failures or misplaced confidence in the reliability of AI capabilities.”

Efforts to collaboratively agree on mitigating AI’s potential downsides have also extended to nuclear command and control guardrails. 

Hoffman noted that the U.S., France and the United Kingdom have committed “to maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”

Thursday’s discussion was held after CNAS released a report last month analyzing the risks AI poses to national security. 

Bill Drexel, a fellow with CNAS’s tech and national security team who co-authored the report, said that the study recommended, in part, that officials “plan for catastrophes abroad — especially from China — that may impact the United States related to AI catastrophic risks.”

While relations between the U.S. and China remain contentious, Kaiser said State has “made it clear that we are open to dialogue with the [People's Republic of China] on military uses of AI and responsible military use of AI, and how to mitigate those risks.”

He added that “there have been some discussions with China” about working to minimize AI’s risks and that there is “a shared interest in addressing some of the risks to strategic stability, the risks of unintended engagements.”