Success of the AI national security memo ‘will be in the implementation,’ industry says


Industry leaders aren’t shying away from more AI guardrails, but note that oversight of the memo’s execution will be important.

Following yesterday’s release of the Biden administration’s first artificial intelligence-centric national security memorandum, technology policy analysts and advocates are paying close attention to the efficacy of the memo’s implementation. 

The memo directs a series of federal actions aimed at securing U.S. leadership in AI innovation, including strengthening supply chain security, forming a new specialized coordination group and streamlining visa processes for applicants with STEM backgrounds. Policy experts are pushing for firm oversight of how those actions are carried out.

“The use of AI in the national security context involves important decisions affecting fundamental rights and liberties,” Samir Jain, VP of policy at the Center for Democracy and Technology, said in comments sent to Nextgov/FCW. “The national security memorandum takes important and meaningful steps to protect those rights, though the proof will be in the implementation. It is critical that uses of AI are subject to democratic accountability notwithstanding legitimate needs for secrecy; we cannot rely on national security agencies to grade their own homework.” 

Identifying specific high-impact use cases, a task established by the memo’s companion Framework to Advance AI Governance and Risk Management in National Security, was also well-received by industry leaders. 

Dave Prakash, head of AI governance at Booz Allen, told Nextgov/FCW that spotlighting prohibited high-risk use cases helps build a stronger path forward in establishing routine risk management and AI governance practices. 

Critically, Prakash said, the regulations will not hinder AI research and development, a balance that both lawmakers and policy officials have been working to strike. 

“We believe that these measures will not obstruct innovation but rather enhance the public trust and confidence in AI systems used by the U.S. government for national security, thereby accelerating AI adoption in the long run,” he said. 

But not all industry players trust that regulators will walk the line between ensuring safety and strangling innovation.

“One of the biggest threats to U.S. leadership in AI — and consequently to national security — comes not from foreign actors but from U.S. regulators. The oversight of this memo does not address this risk directly. In particular, efforts by antitrust regulators to break up leading U.S. tech companies and investigate U.S. AI chipmakers would hurt U.S. competitiveness in AI and help strategic competitors like China pull ahead,” said Daniel Castro, director of the Center for Data Innovation and vice president of the Information Technology and Innovation Foundation.

He noted that stricter regulations from allies like the EU stand to hinder U.S. competitiveness in AI, meaning the memo’s call for international AI governance “rings hollow.”

“Additionally, attempts to influence the Global South through normative frameworks are overshadowed by China’s economic diplomacy and infrastructure investments. Consequently, this part of the memo appears to be more aspirational than practical,” Castro added.

For others, the memo’s instruction to publish more guidance on AI and its impact on cybersecurity protocols was a welcome inclusion. Melissa Ruzzi, director of AI at software-as-a-service company AppOmni, said that making decisions based on data will be critical to gauging the efficacy of the memo’s provisions. 

“The actions listed in the memo are great starting points to get a good picture of the status quo and obtain enough information to make decisions based on data,” Ruzzi said in a statement to Nextgov/FCW. “The data…[that] needs to be collected on the actions is not trivial, and even with the data, assumptions and trade-offs will be necessary for final decision making. Making decisions after data gathering is where the big challenge will be.”