Biden signs first national security memorandum focused on AI
The document aims to balance artificial intelligence innovation and adoption with protections for privacy and civil liberties.
The Biden administration unveiled the first national security memorandum solely focused on artificial intelligence on Thursday, preparing for the impact AI stands to have, and is already having, on geopolitics and diplomacy.
Central to the memorandum is ensuring that the U.S. maintains leadership in both the development and deployment of AI technologies. While this means strengthening supply chains vital to AI components, such as semiconductors, and preserving dominance in AI innovation, the memorandum also prioritizes vigilance over which nations are adopting AI technologies while maintaining a strong human rights-preserving approach.
“The emphasis in the national security memorandum is really making commitments ourselves as a government about how we will adopt and use artificial intelligence,” a senior administration official told reporters during a Wednesday press call.
National Security Advisor Jake Sullivan discussed the memorandum in remarks delivered at Fort McNair in Washington, D.C., on Thursday.
“We want the United States…to take responsible steps to ensure fair competition and open markets, to protect privacy, human rights, civil rights, civil liberties, to make sure that advanced AI systems are safe and trustworthy, to implement safeguards so that AI isn't used to undercut us,” he said.
Key provisions outlined in the memorandum include further empowering the National AI Research Resource — a program run by the National Science Foundation to expand access to AI innovation — directing further international collaboration, increasing cybersecurity information sharing with private sector counterparts, and formally designating the National Institute of Standards and Technology’s AI Safety Institute as industry’s primary contact for government partners.
The administration also released a companion Framework for AI Governance and Risk Management for National Security. The guidance included in the accompanying framework mirrors the European Union’s landmark AI Act in that it identifies high-impact AI use cases based on the risk they pose to national security, human rights, civil liberties, privacy and other concerns.
The provisions in the framework will apply only to federal agencies. Among the framework’s specific restrictions is a requirement that a human remain involved in the operation of any AI tool meant to help inform decisions made by the U.S. president, such as decisions related to nuclear weapons.
As the Biden administration continues to thread the needle between innovation and security in the AI sector, officials hope that the framework’s clarity will spur research and development in safe directions.
“We actually view these restrictions, as well as the high impact cases, as being important in clarifying what the agencies can and cannot do that will actually accelerate experimentation and adoption,” the administration official said. “One of the paradoxical outcomes we've seen is with a lack of policy clarity and a lack of legal clarity about what can and cannot be done, we are likely to see less experimentation and less adoption than with a clear path for use, which is what the NSM and the framework provide.”
Sullivan echoed these comments, emphasizing the role domestic innovation and competition, including heightened domestic adoption, will play in cultivating international authority in all things AI.
“We’re in an age of strategic competition where we have to compete vigorously and also mobilize partners to solve great challenges that no one country can solve on its own,” he said. “In this age, in this world, the application of artificial intelligence will define the future.”