Making software programmers more security-minded from the start can reduce costs and headaches later on.
Information technology shops tend to learn about software security problems only after they have deployed their applications. Lucky organizations discover a flaw during a vulnerability scan. The unlucky ones find out when hackers exploit it. Either way, enlightenment comes too late, and an IT manager may be looking at extensive software rework or the loss of sensitive data.
However, some IT groups aim to push security further upstream in the software development cycle. Their objective is to include security in software from the beginning rather than tacking it on at the end. This direction parallels the industrial sector’s movement away from inspecting the quality of finished products and toward initiatives that focus on addressing quality earlier, such as Statistical Process Control and Total Quality Management.
Although the manufacturing quality movement has existed for decades, security inspections in software development are just getting off the ground. Nonetheless, agencies can take advantage of secure development practices and an array of tools that review code, test for vulnerabilities and make software harder to reverse-engineer.
The obvious benefit of secure coding practices is safer software, but they can also yield significant cost savings.
“It’s really expensive to push out patches,” said Elaine Fedchak, a certified information systems security professional at the Defense Department’s Data and Analysis Center for Software, which provides guidance on software engineering practices. The cost of fixing an error made in the software requirements definition phase multiplies by a factor of 10 as one goes further along the life cycle, she said, citing works such as Roger Pressman’s “Software Engineering: A Practitioner’s Approach.”
The same escalation is true for security-related flaws and offers an incentive for early intervention, Fedchak added. “If you’ve planned for security in the requirements phase…and do the proper risk management, you will have a factor-of-10 savings.”
But organizations must make changes to culture, technology and processes to secure software development work.
“Federal Web developers oftentimes are paid for something that works and meets the functional requirements,” said Bill Geimer, director of information assurance at Open System Sciences and its program manager on an information security contract at the U.S. Agency for International Development. “We’re just getting to the point where security is a built-in requirement.”
Room for improvement
Indeed, the pressure to produce code often places security in the back seat, experts say. Software developers “are frequently pushed to get things out,” said Debra Banning, a principal at Booz Allen Hamilton. “There is a critical need for the software that they have. Sometimes, getting stuff done fast is the driver.”
Another factor behind shaky software is that developers often lack a solid grounding in security. “Developers just don’t develop with security in mind,” said Caleb Sima, chief technology officer at SPI Dynamics, a maker of Web application security assessment products. “They definitely do not have enough knowledge of security.”
Consequently, developers may introduce vulnerabilities during programming. One common problem is failing to set bounds on input parameters, which leaves applications open to buffer overflow attacks, said Samir Kapuria, director of strategic consulting at security vendor Symantec.
A buffer overflow occurs when a cyber assailant inputs more data than an application’s buffer — where an application temporarily holds data — can handle. Buffer overflows can crash systems and provide a springboard for additional attacks.
A given input field may be designed to accept only two characters, for example. But if the code never enforces the buffer’s size, it will accept input of any length, and a malicious user can exploit the sloppy coding, Kapuria said.
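To make the flaw concrete, consider a minimal C sketch of that two-character field, copied with and without a length check. This is an illustration only, not code from any system discussed here, and the function names are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical two-character field, echoing Kapuria's example. */
void copy_code_unsafe(const char *input) {
    char buf[3];            /* two characters plus the '\0' terminator */
    strcpy(buf, input);     /* no length check: longer input overflows buf */
}

void copy_code_safe(const char *input) {
    char buf[3];
    if (strlen(input) >= sizeof buf) {  /* enforce the field's size */
        fprintf(stderr, "input too long\n");
        return;
    }
    strcpy(buf, input);     /* safe: length verified above */
}
```

The unsafe version trusts the caller; an attacker who supplies a long string overwrites adjacent memory, providing the springboard for further attacks that Kapuria describes.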
But software flaws aren’t just about coding. Software designers and architects may neglect to incorporate adequate security at the requirements definition stage. “Most organizations define functionality and feature requirements, but it is important to consider…information security goals and known security issues,” Kapuria said.
In addition, software-testing procedures may fail to test for security.
“People do not test for these security vulnerabilities before they move their application out to production,” Sima said, adding that the traditional quality assurance task is to look for performance problems and functionality issues.
“A security issue is nothing more than a defect, but one with a much higher risk rating,” Sima said.
A better process
To bolster software security, IT groups must adopt practices that address security from the beginning of the development cycle.
The customer must first define the acceptable level of risk, said Steve Mattern, vice president of Apogen Technologies’ Software and Systems Analysis Division.
“There is residual risk associated with anything we build,” said Scott Baker, director of technical services at Apogen. So an important task is to understand the severity of the identified risks and the likelihood that a given weakness will be exploited, he said.
At this point, IT managers may conduct a failure analysis to help shape the security requirements for the software they intend to develop. Baker said this analysis examines the system’s complexity and the vulnerabilities developers want to avoid.
“It’s just trying to wrap your arms around what you don’t want to happen and what you think you can do about it in the design,” Baker said.
An organization may decide to accept a particular risk if, for example, exposure is limited and the cost of remediation is too high. Otherwise, mitigation controls are devised for unacceptable risks and defined as security requirements. Those security features are then embedded into the software’s specifications.
Kapuria said IT departments should classify applications according to sensitivity and create a set of security requirements for each category. A piece of software involving classified data would be considered an extremely critical application and would have security requirements commensurate with its importance, he said.
In the design phase, developers shift their attention to the software architecture, which describes an application’s components and their interfaces.
An out-of-date or incomplete architecture could introduce security problems. An application may be open to remote access at various points. But if those interfaces don’t appear in the architecture, the potential vulnerability goes unnoticed, said Nancy Mead, a senior member of the technical staff at the Computer Emergency Response Team Coordination Center.
“You need to start with a good architecture before you can build security into the process,” she said.
In the development phase, the security task is to make sure programmers follow secure coding practices. Programmers fresh from college may not be aware of what they shouldn’t do, Banning said.
Some organizations offer secure coding classes or hire a third-party company to provide the training. But Mead said those groups are in the minority. “Many organizations don’t really have a standardized education program for their staff for software engineering in general, let alone for security practices,” she said.
If they don’t have formal training programs, agencies can take simple steps to shore up their programming, industry executives say. Sima said developers who pay attention to input validation can remove vulnerabilities that lead to buffer overflows and other attacks. “If everything in the application is validated for input properly, you remove 80 percent of application vulnerabilities,” Sima said.
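As one illustration of that advice, here is a short C sketch of allowlist-style input validation. The field rules, length limit and function name are assumptions made for the example, not requirements drawn from the article.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical allowlist validator: accept only 1 to 32 alphanumeric
   characters and reject everything else before the value reaches the
   rest of the application. */
bool is_valid_username(const char *s) {
    size_t len = 0;
    for (; s[len] != '\0'; len++) {
        if (len >= 32)
            return false;                     /* too long */
        if (!isalnum((unsigned char)s[len]))
            return false;                     /* disallowed character */
    }
    return len > 0;                           /* must be non-empty */
}
```

Validating against a short list of allowed characters, rather than trying to enumerate every dangerous one, keeps malformed input from ever reaching the code paths an attacker wants to hit.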
At the later stages of the development cycle, application testing also requires a security overhaul, experts say. Many developers focus on testing functionality — making sure the software works as intended.
Instead, IT departments should introduce failure testing, Baker said. The idea is to cause a system to fail and determine whether it reacts as planned. The failure-testing process aims to answer a number of questions about an application, Baker said. “How does it fail? Does it lock up? Does it annunciate to an operator that there is a breach? Does it allow the failure to occur?” he asked.
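As a sketch of what such a failure test might look like, the C program below drives a hypothetical bounded-copy routine with input it must refuse and checks that it fails closed. The routine and its interface are assumptions for illustration, not a description of Apogen’s process.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Function under test (hypothetical): a bounded copy that fails
   closed, refusing input that does not fit its destination. */
static int copy_bounded(char *dst, size_t dstlen, const char *src) {
    if (strlen(src) >= dstlen)
        return -1;                  /* refuse rather than overflow */
    strcpy(dst, src);
    return 0;
}

/* Failure tests: feed the routine input it should reject and
   confirm it fails the way the design says it should. */
int main(void) {
    char buf[8];
    char huge[256];
    memset(huge, 'A', sizeof huge - 1);
    huge[sizeof huge - 1] = '\0';

    assert(copy_bounded(buf, sizeof buf, "ok") == 0);        /* normal input */
    assert(copy_bounded(buf, sizeof buf, huge) == -1);       /* oversized input refused */
    assert(copy_bounded(buf, sizeof buf, "exactly8") == -1); /* boundary: 8 chars need 9 bytes */

    puts("failure tests passed");
    return 0;
}
```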
Overall, security calls for a different software-building philosophy. Developers tend to focus on how they want a system to work, but security specialists believe they should also consider how they don’t want that system to work.