Feds beware: New studies demonstrate key AI shortcomings


Recent studies have started to show that there are serious downsides when it comes to AI coding tools' ability to produce secure code.

It’s no secret that artificial intelligence is almost everywhere these days. And while some groups are worried about potentially devastating consequences if the technology continues to advance too quickly, most government agencies are pretty comfortable adopting AI for more practical purposes, employing it in ways that can help advance agency missions.

And the federal government has plenty of guidelines in place for using AI. For example, the AI Accountability Framework for Federal Agencies provides guidance for agencies that are building, selecting or implementing AI systems. According to GAO and the educational institutions that helped draft the framework, the most responsible uses of AI in government should be centered around four complementary principles: governance, data, performance and monitoring.

Writing computer code, or monitoring code written by humans to look for vulnerabilities, fits within that framework. It's also a core capability that most of the new generative AIs easily demonstrate. For example, when the most popular generative AI program, ChatGPT, was upgraded to GPT-4, one of the first things developer OpenAI did at the unveiling was have the AI write the code to quickly generate a live webpage.

Given how quickly most generative AIs can code, it's little wonder that, according to a recent survey by GitHub, more than 90% of developers are already using AI coding tools to help speed up their work. That means the underlying code for most applications and programs being created today is at least partially written by AI, including code written for or used by government agencies. But while the pace at which AI can generate code is impressive, recent studies have started to show that serious downsides come along with that speed, especially when it comes to security.

Trouble in AI coding paradise

The new generative AIs have only been coding successfully for, at most, a couple of years, depending on the model, so it's little wonder that evaluations of their coding prowess are slow to catch up. But studies are being conducted, and the results don't bode well for the future of AI coding, especially for mission-critical areas within government, at least not without some serious improvements.

While AIs are generally able to quickly create apps and programs that work, many of those AI-created applications are also riddled with cybersecurity vulnerabilities that could translate into huge problems if dropped into a live environment. For example, in a recent study conducted at the University of Quebec, researchers asked ChatGPT to generate 21 different programs and applications in a variety of programming languages. While every application the AI coded worked as intended, only five of them were secure from a cybersecurity standpoint. The rest had dangerous vulnerabilities that attackers could easily use to compromise anyone who deployed them.

And these were not minor security flaws, either. They spanned nearly every category of vulnerability listed by the Open Web Application Security Project, along with many others.
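
To make the kind of OWASP-listed flaw these studies describe concrete, here is a minimal, hypothetical Python sketch (not drawn from the Quebec study itself). The first function builds a SQL query by pasting user input into the string, the classic injection pattern; the second does the same lookup with a parameterized query.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is pasted directly into the SQL string,
    # so a value like "x' OR '1'='1" changes the query's logic (SQL injection).
    query = f"SELECT id, role FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps user input as data, not as SQL.
    query = "SELECT id, role FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions run and return the same results for ordinary input, which is exactly why "it works as intended" is a poor proxy for "it is secure."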

In an effort to find out why AI coding was so dangerous from a cybersecurity standpoint, researchers at the University of Maryland, UC Berkeley and Google decided to switch things up a bit and task generative AI not with writing code, but with examining already assembled programs and applications to look for vulnerabilities. That study used 11 AI models, each of which was fed hundreds of examples of programs in multiple languages. Applications rife with known vulnerabilities were mixed in with other code examples that human security experts had certified as secure.

The results of that study were really bad for the AIs. Not only did they fail to detect hidden vulnerabilities, with some AIs missing over 50% of them, but most also flagged secure code as vulnerable when it was not, leading to a high rate of false positives. Those dismal results seemed to surprise even the researchers, who decided to try to correct the problem by training the AIs in better vulnerability detection. They fed the generative AIs thousands of examples of both secure and insecure code, along with explanations whenever a vulnerability was introduced.

Surprisingly, that intense training did little to improve AI performance. Even when the researchers expanded the large language models the AIs used to look for vulnerable code, the final results were still unacceptably bad, both in terms of false positives and vulnerabilities slipping through undetected. That led the researchers to conclude that, no matter how much they tweaked the models, the current generation of AI and “deep learning is still not ready for vulnerability detection.”

Why is AI so bad at secure coding?

All of the studies referenced here are relatively new, so there is not yet much explanation for why generative AI, which performs well at so many tasks, would be so bad at spotting vulnerabilities or writing secure code. The experts I talked with said the most likely reason is that generative AIs are trained on thousands or even millions of examples of human-written code pulled from open sources, code libraries and other repositories, and much of that code is heavily flawed. Generative AI may simply be too poisoned by all of those bad examples in its training data to be easily redeemed. Even when the researchers in the University of Maryland and UC Berkeley study tried to correct the models with fresh data, their new examples were just a drop in the bucket, not nearly enough to improve performance.

One study, conducted by Secure Code Warrior, did try to address this question directly with an experiment that selectively fed generative AIs specific examples of both vulnerable and secure code and tasked them with identifying any security threats. In that study, the differences between the secure and vulnerable code examples presented to the AIs were very subtle, which helped the researchers determine which factors were specifically tripping up the AIs when it came to vulnerability detection in code.

According to SCW, one of the biggest reasons that generative AIs struggle with secure coding is a lack of contextual understanding of how the code in question fits into a larger project or the overall infrastructure, and of the security issues that can stem directly from that. The company gives several examples to prove the point, in which a snippet of code is secure when it triggers a standalone function but becomes vulnerable, through business logic flaws, improper permissions or security misconfigurations, once it is integrated into a larger system or project. Because generative AIs don't generally understand the context in which the code they are examining will be used, they will often flag secure code as vulnerable, or code that has vulnerabilities as safe.
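
As a hypothetical illustration of that point (not one of the SCW examples), consider a small Python helper. Read on its own, with a hard-coded filename, it looks harmless; wired into a web handler where the filename comes from an attacker-controlled request, the very same snippet becomes a path traversal flaw.

```python
import os

BASE_DIR = "/var/app/reports"

def read_report(filename: str) -> str:
    # In isolation this looks fine: join a base directory and a name, read the file.
    path = os.path.join(BASE_DIR, filename)
    with open(path) as f:
        return f.read()

# Standalone use: the argument is a constant, so the snippet is effectively safe.
#   read_report("q3_summary.txt")

# Integrated use: if `filename` arrives from an HTTP request, an attacker can pass
# "../../etc/passwd" (or an absolute path), and os.path.join will happily escape
# BASE_DIR -- the same code is now a path traversal vulnerability.
```

A human reviewer who knows where that filename comes from spots the risk immediately; a model judging the snippet in isolation has no such context to draw on.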

In a sense, because an AI does not know the context in which code will be used, it ends up guessing about its vulnerability status, since AIs almost never admit that they don't know something. The other area where AIs struggled in the SCW study was when a vulnerability came down to something small, such as the order of input parameters. Generative AIs may simply not be experienced enough to recognize how a detail that minor, buried in the middle of a large snippet of code, can lead to security problems.
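
Here is a hypothetical sketch (again, not one of the SCW test cases) of how nothing more than swapped arguments can turn a correct access check into a privilege escalation bug. The flawed call reads naturally and runs without errors, which is exactly the kind of subtle detail the study suggests models miss.

```python
# Hypothetical role check: higher numbers mean more privilege.
ROLE_LEVEL = {"viewer": 1, "editor": 2, "admin": 3}

def is_allowed(user_role: str, required_role: str) -> bool:
    # Correct logic: the user's privilege level must meet or exceed the requirement.
    return ROLE_LEVEL[user_role] >= ROLE_LEVEL[required_role]

user_role = "viewer"      # the logged-in user's role
required_role = "admin"   # role needed for this endpoint

print(is_allowed(user_role, required_role))   # False -- correctly denied
print(is_allowed(required_role, user_role))   # True  -- swapped arguments quietly
                                              # grant the viewer admin access
```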

The study does not offer a fix for AI's inability to spot insecure code, but it does say that generative AI could still have a role in coding when paired tightly with experienced human developers who keep a watchful eye on their AI companions. For now, without a good technical solution, that may be the best path forward for agencies that need to tap into the speed generative AI can offer when coding but can't accept the risks of letting it create government applications and programs unsupervised.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys