AI could be tapped to design weapons of mass destruction, DHS warns
The new guidance was mandated by President Biden’s October 2023 executive order.
The Department of Homeland Security released a new report on the ways artificial intelligence could be misused to develop chemical, biological, radiological and nuclear threats, along with new guidelines on securing critical infrastructure against AI risks, both required by President Joe Biden’s October 2023 executive order on AI.
Announced on Monday, the new documentation has two primary objectives: establishing guidelines to mitigate AI risks to critical infrastructure, and mitigating the misuse of AI in the development and production of chemical, biological, radiological and nuclear materials and threats.
The critical infrastructure guidance prioritizes water supplies, power grids and telecommunications operations, which have been increasingly targeted by malicious cyber actors. It analyzes three categories of AI threats to critical infrastructure: attacks using AI, attacks targeting AI systems, and failures in the design or implementation of AI systems.
“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks. Our Department is taking steps to identify and mitigate those threats,” said Secretary of Homeland Security Alejandro Mayorkas in a press release.
The critical infrastructure guidance offers mitigation strategies built on the four functions at the core of the National Institute of Standards and Technology’s AI Risk Management Framework: govern, map, measure and manage. Together, the four functions provide a checks-and-balances approach to managing AI technologies that are intertwined with systems overseeing sensitive infrastructure.
In the report on chemical, biological, radiological and nuclear threats, the DHS Countering Weapons of Mass Destruction (CWMD) Office and the Cybersecurity and Infrastructure Security Agency analyzed the risks AI systems could pose to the development and production of weapons of mass destruction and recommended steps to counter these emerging risks.
The report, submitted to the president, identifies trends within the growing AI field along with specific types of AI and machine learning models that could enable or exacerbate biological or chemical threats to the U.S. It also outlines techniques for mitigating national security threats through oversight of the training, deployment, publication and use of AI models and the data used to create them, particularly how safety evaluations and guardrails can be leveraged in these instances.
“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” CWMD Assistant Secretary Mary Ellen Callahan said in a press release. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI.”