The Federal Government is Pushing for Security-Aware Developers
But do official recommendations work in their world?
In the past two years, we have seen the cybersecurity world forced to “grow up” quickly. The approach to security in the supply chain has been especially prominent, requiring fast-tracked solutions to successfully align with accelerating official mandates and recommendations. The SolarWinds and Log4j incidents certainly helped this along, but the truth is, this scrutiny was much-needed and a long time coming.
Mercifully, the U.S. government has taken multiple steps to zero in on several problem areas in the software supply chain, and continued focus on security has ensured it is a mainstay in the media, not to mention the minds of leaders in the space. One such area has been the issue of developer influence in the supply chain, and the need for their security skills to be verified if they’re working for a software vendor servicing the government. Make no mistake: vulnerabilities and misconfigurations affect the private sector just as much as any government department around the world, and these measures have the potential to make a positive impact on developer security awareness, if they are effective and widely adopted.
To someone who has dedicated much of their career to developer security enablement, a spotlight on an issue of this magnitude might seem cause for celebration, but I won't be popping the champagne just yet. After all, instilling security best practices in the development cohort is much easier said than done, and the industrywide cultural shift required to make secure coding a consistent standard is simply not where it needs to be. Much like other general guidelines aimed at developer-driven security, these recommendations tend to lack the specifics needed to break through the traditional barriers that have made security a difficult sell to developers in the first place.
Do the guidelines adequately consider a realistic developer workflow?
Many of the key obstacles in the way of developer-driven security practices directly relate to how they integrate with developers’ long-established workflow and ways of operating in a high-pressure environment. Simply plonking security responsibilities on top of every other goal they are required to achieve hasn’t resulted in success, and doing the same thing with security training—especially anything mandatory—is just another nail in the coffin for any affection a developer may have had towards secure coding in the first place.
We know from our own independent research that 86% of developers knowingly ship vulnerable code, and this is wildly at odds with what the U.S. government, at least, is attempting to declare as an acceptable level of security awareness and skill for a development team.
The guidelines state, “Developers should take regular and relevant security training, both for common topics and those deemed necessary for the individual role. Successful completion should be tracked for all engineers. Organizations should ensure individuals complete security training commensurate with the impact level of the system and software to which the individuals are assigned.” This is ideal, but naturally, easier said than done, especially in organizations where security maturity and culture are lacking.
In the pressure cooker environment of modern software development, fast deployment of features reigns supreme, and, overwhelmingly, multiple studies—including our own—have revealed that developers simply have no time to train. The GitLab 2022 Global DevSecOps Survey showed that 35% of developers are releasing code twice as fast compared to three years ago, and demand shows no signs of slowing.
Another concern is that, much like previous guidelines from NIST and others, there is little instruction on the types of training that would be effective. Many organizations already struggle to find right-fit security training and tools, and I don't expect this to be any different. If training eats up too much time, or no time is allocated for it in the workday in the first place, there is very little chance it will be embraced. Similarly, if it demands constant context-switching to access, has no relevance to developers' actual work, and is easy to ignore, the effort to roll it out at all has to be questioned.
Does this achieve security nirvana—i.e. security at speed?
If you asked any development team what their primary goal is, "software delivery at speed" would likely top the list. Securing the software supply chain is not front-of-mind for developers, and realistically, it is not helpful for secure coding guidance, or developer-led security best practices, to get quite that granular when so many developers have so few opportunities to learn even the foundational basics.
Introducing sweeping, restrictive security responsibilities for developers who don’t have the time, expertise or ongoing training needed to successfully mitigate common vulnerabilities will inevitably slow production to a crawl, and this will not fly in most government departments, let alone private enterprises. These recommendations need careful planning to implement, and organizations would do well to assess them in relation to their most pressing and relevant security issues.
A surprising element of the guidelines is that they detail fairly complex areas of security developers are expected to cover, such as insecure design principles and advanced threat modeling techniques, yet remain rather limited in their prescribed directions for code-level vulnerabilities. For example: "Software development group managers should ensure that the development process prevents the intentional and unintentional injection of malicious code or design flaws into production code."
With security misconfigurations causing 80% of all data breaches, according to Gartner data, and API access control another hot attack vector, this guidance is somewhat simplistic. Teaching developers to mitigate those scenarios alone won't adequately prepare them for advanced threat modeling tasks, for example.
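To make "code-level vulnerability" concrete, here is a minimal, hypothetical sketch (function names and data invented purely for illustration) of broken access control, the kind of API flaw referenced above, alongside its fix:

```python
# Illustrative only: a simulated record store standing in for a real API backend.
RECORDS = {
    1: {"owner": "alice", "data": "alice's invoice"},
    2: {"owner": "bob", "data": "bob's invoice"},
}

def get_record_insecure(record_id, requesting_user):
    # Vulnerable: any authenticated user can fetch any record just by
    # guessing its ID (an insecure direct object reference).
    return RECORDS[record_id]["data"]

def get_record_secure(record_id, requesting_user):
    # Fixed: verify the requester actually owns the record before returning it.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("access denied")
    return record["data"]
```

The insecure version trusts a caller-supplied ID; the fix is a single ownership check. It is exactly this kind of concrete, code-level pattern that the guidelines largely leave unstated.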
Steps for an enterprise-friendly approach
The bottom line is that we all need to care more about security, and these guidelines are a meaningful, commendable step in the right direction. While they are supply chain-centric, over time, it is likely that these recommendations will be cited increasingly in the private sector, dictating security awareness pathways for development teams, and it is at this point that many will find they need refinement to function in a realistic way.
Organizations should first make an honest assessment of their security maturity, and I’d recommend the following pointers to start making these government suggestions work for you:
- Find holistic training, and stay serious about it. Generic training that is not job-relevant, or even in the languages and frameworks used by the development cohort, will be next to useless in enabling developers to solve common security problems. The idea is to correct poor coding patterns, and they need hands-on, relevant, and engaging training.
- Don’t bombard the development team. A surefire way to upset the developers would be to make security part of their KPIs overnight, make them use disruptive tools and expect them to be ready to start preparing comprehensive threat modeling documentation in an instant. The guidelines don’t address skill-building and foundational learning very well, so it’s up to security professionals and managers to implement viable pathways where learning can occur in layers, build upon existing knowledge, and be continually assessed.
- Incentivize developers to build a security-first mindset. Security-aware developers are highly sought after and provide the often unsung backbone of security best practices in an organization. They should be rewarded, recognized among their peers and given ample opportunity to advance their careers.
Pieter Danhieux is the CEO and co-founder of Secure Code Warrior.