AI provides a net advantage to federal cyber defenders — if they can use it
COMMENTARY | If agencies don’t invest and innovate internally and in partnership with industry and the research community, the cyber balance in the use of AI/ML can tip towards the attacker.
Artificial intelligence is a double-edged sword for federal cybersecurity. AI has empowered malicious cyber actors in ways we could not have imagined a few years ago. The administration’s executive order on AI, signed by President Joe Biden in October, noted AI’s potential to enable powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets.
AI levels the playing field for those targeting federal networks, enabling a malicious cyber actor with limited expertise or funding to develop and field cyber attacks that previously required the resources of a nation-state advanced persistent threat (APT) actor. And these more sophisticated APT actors can attempt to undermine the AI models the federal government relies on through data or model poisoning: intentionally polluting the data used to train an AI system, or tampering with the trained model itself, to corrupt its integrity and degrade its accuracy.
How malicious cyber actors are using AI
AI is giving cyber criminals more powerful tools and capabilities. For example, generative AI can make spear phishing attacks plausible enough to fool even the most security-savvy user. When hackers compromise your email account, they may not merely harvest your address book; they may also steal enough of your correspondence for generative AI to tailor the topics and tone of the spear phishing emails launched from your compromised account to match what you have previously written to each of your contacts. When spoofed emails become virtually indistinguishable from the real thing, it is no longer realistic to expect users to serve as the first line of defense, the proverbial ‘human firewall’.
Generative AI can also eliminate programming skill as a prerequisite for a would-be malicious cyber actor, since large language models have been successfully used to write malware. AI is likewise being used to rapidly work out how to exploit software vulnerabilities once they become publicly known, giving malicious actors the potential to ‘weaponize’ those vulnerabilities faster than most users can apply vendor patches or updates. AI also has the potential to help malicious actors discover new, previously unknown vulnerabilities. This is a classic dual use problem, since developing the capability to find such vulnerabilities is also a priority for programmers and security professionals focused on supply chain security and secure code development.
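To make the dual use point concrete, here is a minimal, deliberately toy sketch of automated vulnerability discovery in the defender’s role: randomly fuzzing your own code to surface a crash before an attacker does. The buggy parser and every name in it are hypothetical illustrations, not any real tool; production fuzzers, and their AI-assisted successors, are far more sophisticated.

```python
# Toy sketch of automated vulnerability discovery from the defender's side:
# fuzz a deliberately buggy, hypothetical parser with random inputs until
# it crashes. Real fuzzers add coverage feedback and, increasingly, AI.
import random

def fragile_parser(data: bytes) -> float:
    # Hypothetical bug: a count field trusted without validation.
    count = data[0]
    return sum(data[1:1 + count]) / count  # ZeroDivisionError when count == 0

random.seed(1)
for trial in range(10_000):
    sample = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        fragile_parser(sample)
    except Exception as exc:
        print(f"trial {trial}: crash on input {sample.hex()}: {exc!r}")
        break
```

The same loop, pointed at someone else’s software, is the attacker’s workflow, which is exactly why this capability cuts both ways.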
AI-powered big data analytics has also changed the way malicious cyber actors select the information they target for theft. Stealing a large data set, such as the Office of Personnel Management’s repository of roughly 22 million Standard Form 86 background investigation records, made little sense, regardless of the sensitivity of that information, when there was no viable way to analyze and exploit it at scale. The rise of AI and growing computing power over the past decade have made large data sets of all varieties exploitable, and therefore priority targets for federal agencies to defend.
AI as a net boon to cyber defenders
While the dual use nature of AI has certainly helped would-be malicious cyber actors, AI is also an essential ingredient in creating enterprisewide visibility into potential malicious cyber activity and in powering automation of both cybersecurity and network performance. Cybersecurity professionals use the phrase ‘attack surface’ to describe the size and complexity of the digital environment and the difficulty of fully understanding or mapping it. The growing ability of cybersecurity and IT tools to share data, and the rise of interoperable ecosystems such as cybersecurity mesh architectures, mean that devices across an agency can generate data that allows AI to characterize normal and abnormal (and potentially malicious) activity in real time and to drive an automated response. At their heart, AI and machine learning are powered by big data. And in general, who is better positioned to have more information about an agency’s IT environment: the people who administer and defend it, or those trying to break into it from the outside as a relatively unknown ‘black box’?
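As a minimal sketch of what that looks like in practice, the fragment below trains a model of ‘normal’ on telemetry the defender already owns and flags outliers in new events. It uses scikit-learn’s IsolationForest; the telemetry fields, numbers and thresholds are invented for illustration, not drawn from any agency tool.

```python
# Minimal sketch: learn "normal" device behavior from historical telemetry,
# then flag outliers in new events. Feature names are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-device telemetry: [bytes_out_mb, logins_per_hour, distinct_ports]
normal_traffic = rng.normal(loc=[50, 5, 8], scale=[10, 2, 3], size=(1000, 3))

# Fit a baseline of "normal" on data the defender already controls.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new observations in (near) real time; -1 marks an anomaly.
new_events = np.array([
    [55, 6, 9],      # ordinary workstation activity
    [900, 40, 150],  # exfiltration-like burst
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY -> trigger automated response" if label == -1 else "normal"
    print(event, status)
```

The defender’s structural advantage is visible even in this toy: the model is only as good as the historical picture of normal behavior, and it is the defender, not the outside attacker, who holds that data.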
Agency teams are starting to integrate AI into their cyber defenses. Automation is vital for detecting and defending against threats, a way of turning the size and complexity of the attack surface from a liability into a collection capability that can spot malicious activity when it is launched and stop it or mitigate its impact. AI-powered automation can take on rote tasks, letting machines identify anomalies and execute predetermined responses, and freeing analysts for more satisfying and impactful work: the tasks that require the human mind and human judgment.
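Here is a minimal sketch of that division of labor, with hypothetical alert fields and response actions standing in for a real security orchestration integration:

```python
# Minimal sketch of a predetermined-response playbook. The alert fields and
# response actions are hypothetical stand-ins for a real SOAR integration.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    category: str   # e.g. "exfiltration", "brute_force"
    score: float    # model confidence, 0.0-1.0

def isolate_host(host: str) -> None:
    print(f"[auto] quarantining {host} at the network layer")

def open_ticket(alert: Alert) -> None:
    print(f"[auto] ticket opened for analyst review: {alert}")

# Predetermined responses: machines handle the rote, high-confidence cases.
PLAYBOOK = {
    "exfiltration": isolate_host,
    "brute_force": isolate_host,
}

def respond(alert: Alert) -> None:
    action = PLAYBOOK.get(alert.category)
    if action and alert.score >= 0.9:
        action(alert.host)   # automated containment
    open_ticket(alert)       # humans keep the judgment calls and oversight

respond(Alert(host="ws-042", category="exfiltration", score=0.97))
```

The design choice worth noting is the guardrail: only high-confidence, pre-approved categories trigger automated containment, while every alert still lands in front of a human analyst.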
In recognition of the growing importance of AI and machine learning to cybersecurity, federal leaders have brought AI to the forefront of a number of governmentwide initiatives beyond the recent executive order on AI. In the strategic objectives section of the 2023 National Cybersecurity Strategy, the White House called for expanding federal cybersecurity research and development, with particular emphasis on projects that use artificial intelligence. And in August, the Biden administration launched a two-year competition to harness AI to protect the United States’ most important software, such as the code that helps run the internet and our critical infrastructure.
It’s a race
On balance, there are niches of asymmetric advantage that favor the attacker, such as the ability of generative AI to craft spear phishing content that will fool even cyber-savvy recipients. There are dual use areas, such as AI-powered discovery of vulnerabilities in computer code, that can be used by both enterprises and would-be attackers. And there are elements, such as the growing convergence between networking and security and the rising interoperability of data, tools and architectures, that favor the defender. Federal agencies have the home field advantage in this contest. In a competitive race, AI and ML offer federal cyber defenders more benefit than they give attackers. But like any race, that assumes both competitors are on the course and actually running. If agencies don’t invest and innovate internally and in partnership with industry and the research community, the cyber balance in the use of AI/ML can tip toward the attacker. But it is our race to lose.