DHS releases guidance for AI in critical infrastructure
The Department of Homeland Security worked with its diverse AI Safety and Security Board to develop a holistic approach for securing critical infrastructure that leverages AI technologies.
The Department of Homeland Security released a new series of recommendations for the safe use of artificial intelligence tools in U.S. critical infrastructure, breaking down flexible guidance by each sector included in the broader AI supply chain.
Unveiled on Thursday, the Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure — created in consultation with Homeland’s internal Artificial Intelligence Safety and Security Board — tailors recommended actions to specific sectors key to the AI industry. These include cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civil society and public sector entities.
DHS Secretary Alejandro Mayorkas called the guidance “groundbreaking,” describing it as the first such document created through “extensive collaboration” with a board that included AI developers themselves, with the aim of helping civil society deploy the software responsibly.
“The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access and so much more,” Mayorkas said during a Thursday press call. “It is, quite frankly, exceedingly rare to have leading AI developers engaged directly with civil society on issues that are at the forefront of today's AI debates, and [it] presents such a collaborative framework.”
He added that the framework serves as a new model of shared responsibility and governance for safeguarding critical infrastructure that uses AI technologies. The report identifies specific recommendations that each stakeholder can adopt within its existing AI governance protocols.
“It is descriptive and not prescriptive,” Mayorkas said. “The evolution of the technology is still rapidly advancing, and to be particularly prescriptive would not necessarily capture that evolution. And we intend the framework to be, frankly, a living document and to change as developments in the industry change as well.”
The voluntary guidance for stakeholders working at the intersection of AI and critical infrastructure includes five steps: securing environments, driving responsible model design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact.
Different stakeholders will have different tasks within these steps. An AI software developer, for example, would be responsible for managing access to models and data to secure a given environment, whereas a critical infrastructure owner and operator would be responsible for securing existing IT infrastructure.
Given that the report’s contents are voluntary, Mayorkas noted that he expects board members to champion the guidance and put it into practice across their industries, especially ahead of an incoming Trump administration that stands to change current tech and AI policy.
“There is a great push for the promulgation of regulations, the enactment of legislation in the AI sphere,” he said. “We are leading the world in innovation in AI and broader technologies, and this framework, if, in fact, adopted and implemented as broadly as we envision it will be –– and the board members envision it should be –– will and could ward off precipitous regulation and legislation that does not move at the speed of business and does not embrace and support our innovative leadership.”