White House official tees up AI executive order
Deputy National Security Advisor Anne Neuberger spoke on the executive branch’s ambitions to establish guardrails for AI systems, emphasizing a collaborative mentality and an executive order that can serve as a bridge to regulation in Congress.
A senior White House security official previewed the Biden administration’s forthcoming efforts to apply regulations and oversight to emerging technologies like artificial intelligence, including a pending executive order and more international partnerships.
Speaking during the Chamber of Commerce’s GlobalAI discussion Wednesday morning, Deputy National Security Advisor Anne Neuberger addressed the “promise and peril” of rapidly evolving AI systems and how federal officials are attempting to thread the needle between innovation and mitigation.
Harnessing AI algorithms to aid and improve cybersecurity is one step the executive branch is investigating.
“The tech we all rely on is still not built as secure as needed,” she said. “And we see the opportunity to train AI-MLs to help generate more secure code.”
Neuberger said that weaving AI-driven code into digital networks can help patch vulnerabilities quickly, thereby preventing lapses in cybersecurity. Part of advancing the cybersecurity use case for AI is the forthcoming meeting of the Counter Ransomware Initiative, which Neuberger described as the largest cyber partnership in the world, with 47 countries participating.
At that meeting next month, a cyber challenge focused on analyzing blockchain systems to better track cryptocurrency-related ransomware will also demonstrate how AI can fight cyberthreats.
She also highlighted various public health and agricultural use cases that AI systems could support, efforts that would largely be buttressed by continued collaboration between the public and private sectors, especially in advancing the secure-by-design principles several agencies have endorsed.
“Those are important commitments and that will need to continue to evolve to really make them real, not only to ensure those AI models are as safe as they need to be before they are used, and that's critical, and then in an ongoing way, but also that the broader public trust them because that trust is critical to their use,” she said.
One of the tasks the ongoing public-private collaborations can take on is battling deepfakes and other AI-generated content that mimics real news. Further regulatory exploration in Congress, along with an executive order slated for release this fall, will help ensure AI technologies are built with security as a grounding design principle, she added.
“It [the executive order] is incredibly comprehensive,” Neuberger said. “It's a bridge to regulation because it pushes the boundaries and is only within the boundaries of what's permissible…by law.”