Think tank report envisions a cyber ‘good place’ for AI and how to get there
Amid the ongoing rise of artificial intelligence technologies and their integration into digital networks, the Aspen Institute compiled a new list of cybersecurity recommendations for government and industry.
A think tank’s new artificial intelligence report aims to guide government and industry toward a “good place” in the future — where AI-based systems and other emerging tech can benefit digital defenses — and away from a “bad place,” where attackers reap the benefits instead.
A collaboration between the Aspen Institute’s U.S.-based and Global Cybersecurity Working Groups, the document functions as a roadmap and resource to help both public and private sector organizations mitigate security risks in leveraging AI while maximizing the advantages.
Seven recommendations act as pillars of the Aspen Institute’s guidance: stay true to cybersecurity principles; don’t live in a silo; proactively manage which decisions AI will be making; improve logging, log review and log maintenance; be intelligently transparent about AI; ensure your contracts contain AI rules of engagement; and be wary of the bandwagon effect.
While the report’s authors argue that organizations should adhere to standard cybersecurity practices, such as multifactor authentication and vetting partner entities for compliance, they also highlight the need for collaboration between cybersecurity and AI technicians to improve transparency and broaden understanding of AI software.
“The biggest impact AI has on cybersecurity for both attackers and defenders is how it transforms the business of both: how it impacts the economics and capabilities of cyber criminals and defenders alike, how they will collaborate, and what kind of new division of labor will lead to tactics and techniques being both more effective and more common,” said Yameen Huq, director of the U.S. Cybersecurity Group at Aspen Digital, in a press release.
The report notes that AI can facilitate cyberattacks by making them less expensive and improving their efficacy through quick identification of system vulnerabilities to exploit.
“Malicious AI platforms are developed in jurisdictions with few legal restrictions and then deployed around the world,” the report says. “Because criminals can innovate more freely, they’re able to design attacks that even AI-enabled defenses struggle against because of their novelty.”
The report recommends that government cybersecurity professionals identify high-risk AI tools before implementing them in digital environments, promote access to open-source cybersecurity resources and provide current public servants with educational opportunities to digitally upskill the workforce.
On the industry side, the report advises that organizations “stick to the basics” in cybersecurity protocols, including leveraging asset management, network access controls and continuous monitoring for system abnormalities.
But the report’s authors note that the guidance may not apply uniformly to every organization, primarily because AI remains an emerging technology and industry whose trajectory is still speculative.