Preventive measures

Making software programmers more security-minded from the start can reduce costs and headaches later on.

Information technology shops tend to learn about software security problems only after they have deployed their applications. Lucky organizations discover a flaw during a vulnerability scan. The unlucky ones find out when hackers exploit a software security flaw. In both cases, enlightenment comes too late. And, as a result, an IT manager may be looking at extensive software rework or the loss of sensitive data.

However, some IT groups aim to push security further upstream in the software development cycle. Their objective is to include security in software from the beginning rather than tacking it on at the end. This direction parallels the industrial sector’s movement away from inspecting the quality of finished products and toward initiatives that focus on addressing quality earlier, such as Statistical Process Control and Total Quality Management.

Although the manufacturing quality movement has existed for decades, security inspections in software development are just getting off the ground. Nonetheless, agencies can take advantage of secure development practices and an array of tools that review code, test for vulnerabilities and make software harder to reverse-engineer.

The obvious benefit of secure coding is improved security, but it could also yield significant cost savings.

“It’s really expensive to push out patches,” said Elaine Fedchak, a certified information systems security professional at the Defense Department’s Data and Analysis Center for Software, which provides guidance on software engineering practices. The cost of fixing an error made in the software requirements definition phase multiplies by a factor of 10 as one goes further along the life cycle, she said, citing works such as Roger Pressman’s “Software Engineering: A Practitioner’s Approach.”

The same escalation is true for security-related flaws and offers an incentive for early intervention, Fedchak added. “If you’ve planned for security in the requirements phase…and do the proper risk management, you will have a factor-of-10 savings.”

But organizations must make changes to culture, technology and processes to secure software development work.

“Federal Web developers oftentimes are paid for something that works and meets the functional requirements,” said Bill Geimer, director of information assurance at Open System Sciences and its program manager on an information security contract at the U.S. Agency for International Development. “We’re just getting to the point where security is a built-in requirement.”

Room for improvement

Indeed, the pressure to produce code often places security in the back seat, experts say. Software developers “are frequently pushed to get things out,” said Debra Banning, a principal at Booz Allen Hamilton. “There is a critical need for the software that they have. Sometimes, getting stuff done fast is the driver.”

Another factor behind shaky software is that developers may lack a solid grounding in security. “Developers just don’t develop with security in mind,” said Caleb Sima, chief technology officer at SPI Dynamics, a maker of Web application security assessment products. “They definitely do not have enough knowledge of security.”

Consequently, they may introduce vulnerabilities into software during programming. One common problem is failure to set parameters to prevent buffer overflow attacks, said Samir Kapuria, director of strategic consulting at security vendor Symantec.

A buffer overflow occurs when a cyber assailant inputs more data than an application’s buffer — where an application temporarily holds data — can handle. Buffer overflows can crash systems and provide a springboard for additional attacks.

The correct input for a given application may be only two characters long. But if the code never checks input against the buffer’s size, an unlimited number of characters may be accepted, and a malicious user can exploit the poor coding practice, Kapuria said.
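Kapuria’s example translates into a few lines of C. The sketch below is purely illustrative and not drawn from any product mentioned in this story: a three-byte buffer meant to hold a two-character code is read first without a length limit, then with one.

    #include <stdio.h>

    int main(void)
    {
        char answer[3];   /* room for a two-character code plus '\0' */

        /* Unsafe: gets() imposes no length limit, so any input longer
           than two characters writes past the end of answer -- the
           classic buffer overflow. The call is so dangerous it was
           removed from the C standard.

           gets(answer);
        */

        /* Safer: fgets() stops after sizeof(answer) - 1 characters,
           so the buffer's declared size bounds the input no matter
           what a user types. */
        if (fgets(answer, sizeof answer, stdin) != NULL)
            printf("read: %s\n", answer);

        return 0;
    }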

But software flaws aren’t just about coding. Software designers and architects may neglect to incorporate adequate security at the requirements definition stage. “Most organizations define functionality and feature requirements, but it is important to consider…information security goals and known security issues,” Kapuria said.

In addition, software-testing procedures may fail to test for security.

“People do not test for these security vulnerabilities before they move their application out to production,” Sima said, adding that the traditional quality assurance task is to look for performance problems and functionality issues.

“A security issue is nothing more than a defect, but one with a much higher risk rating,” Sima said.

A better process

To bolster software security, IT groups must adopt practices that address security from the beginning of the development cycle.

The customer must first define the acceptable level of risk, said Steve Mattern, vice president of Apogen Technologies’ Software and Systems Analysis Division.

“There is residual risk associated with anything we build,” said Scott Baker, director of technical services at Apogen. So an important task is to understand the severity of the identified risks and the likelihood that a given weakness will be exploited, he said.

At this point, IT managers may conduct a failure analysis to help shape the security requirements for the software they intend to develop. Baker said this analysis examines the system’s complexity and the vulnerabilities developers want to avoid.

“It’s just trying to wrap your arms around what you don’t want to happen and what you think you can do about it in the design,” Baker said.

An organization may decide to accept a particular risk if, for example, exposure is limited and the cost of remediation is too high. Otherwise, mitigation controls are devised for unacceptable risks and defined as security requirements. Those security features are then embedded into the software’s specifications.

Kapuria said IT departments should classify applications according to sensitivity and create a set of security requirements for each category. A piece of software involving classified data would be considered an extremely critical application and would have security requirements commensurate with its importance, he said.

Developers shift their attention to software architecture in the design phase. Software architecture describes the components of an application and their interfaces.

An out-of-date or incomplete architecture could introduce security problems. An application may be open to remote access at various points. But if those interfaces don’t appear in the architecture, the potential vulnerability goes unnoticed, said Nancy Mead, a senior member of the technical staff at the Computer Emergency Response Team Coordination Center.

“You need to start with a good architecture before you can build security into the process,” she said.

In the development phase, the security task is to make sure programmers follow secure coding practices. Programmers fresh from college may not be aware of what they shouldn’t do, Banning said.

Some organizations offer secure coding classes or hire a third-party company to provide the training. But Mead said those groups are in the minority. “Many organizations don’t really have a standardized education program for their staff for software engineering in general, let alone for security practices,” she said.

If they don’t have formal training programs, agencies can take simple steps to shore up their programming, industry executives say. Sima said developers who pay attention to input validation can remove vulnerabilities that lead to buffer overflows and other attacks. “If everything in the application is validated for input properly, you remove 80 percent of application vulnerabilities,” Sima said.
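A whitelist check on length and character set is the simplest form of the validation Sima describes. The C sketch below is a hypothetical illustration; the field name and the eight-character limit are invented for the example.

    #include <ctype.h>
    #include <stddef.h>
    #include <string.h>

    /* Accept an account ID only if it is one to eight characters, all
       alphanumeric. Rejecting everything else up front keeps oversized
       or malformed input away from buffers, queries and shell commands
       deeper in the application. */
    static int valid_account_id(const char *s)
    {
        size_t len = strlen(s);

        if (len == 0 || len > 8)
            return 0;                        /* wrong length: reject */

        for (size_t i = 0; i < len; i++)
            if (!isalnum((unsigned char)s[i]))
                return 0;                    /* unexpected character: reject */

        return 1;                            /* input fits the whitelist */
    }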

At the later stages of the development cycle, application testing also requires a security overhaul, experts say. Many developers focus on testing functionality — making sure the software works as intended.

Instead, IT departments should introduce failure testing, Baker said. The idea is to cause a system to fail and determine whether it reacts as planned. The failure-testing process aims to answer a number of questions about an application, Baker said. “How does it fail? Does it lock up? Does it annunciate to an operator that there is a breach? Does it allow the failure to occur?” he asked.
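In practice, a failure test drives a component with input it must refuse and verifies that it fails closed. The C sketch below pairs a conventional functional test with such a failure test; the parser is hypothetical, invented only to make the pattern concrete.

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical parser: copy a login name into a fixed buffer, but
       refuse any input that would not fit. */
    static int parse_login(const char *input, char *out, size_t outsz)
    {
        if (strlen(input) >= outsz)
            return -1;              /* fail closed on oversized input */
        strcpy(out, input);         /* safe: length already checked */
        return 0;
    }

    int main(void)
    {
        char out[8];

        /* Functional test: well-formed input succeeds. */
        assert(parse_login("alice", out, sizeof out) == 0);

        /* Failure test: deliberately oversized input must be rejected,
           not truncated or allowed to overflow the buffer. */
        assert(parse_login("AAAAAAAAAAAAAAAAAAAA", out, sizeof out) == -1);

        puts("failure tests passed");
        return 0;
    }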

Overall, security calls for a different software-building philosophy. Developers tend to focus on how they want a system to work, but security specialists believe they should also consider how they don’t want that system to work.

Helpful resources

A number of resources dealing with secure software development are in various stages of completion in the government sector. Here is a sampling.
  • The Homeland Security Department and Carnegie Mellon University’s Software Engineering Institute launched the Build Security In Web portal (buildsecurityin.us-cert.gov) in October 2005. The Web site aims to bring together best practices, tools and other software assurance resources. Nancy Mead, a senior member of the technical staff at SEI’s Computer Emergency Response Team Coordination Center, said the site will expand this year. New areas may include security governance/management, penetration testing and security assurance cases.

  • The Data and Analysis Center for Software (www.dacs.dtic.mil) and the Information Assurance Technology Analysis Center (iac.dtic.mil/iatac) plan to compile a report on software assurance through secure software engineering. The report will cover methods, tools and best practices. It will also point to resources such as Build Security In. DACS and IATAC are information analysis centers operating under the Defense Technical Information Center. Elaine Fedchak, a certified information systems security professional at DACS, said the report will take about six months to produce.

  • The Federal Aviation Administration and the Defense Department collaborated on a project to identify best safety and security practices in software engineering. The results were published in 2004 in the report “Safety and Security Extensions for Integrated Capability Maturity Models” (www.faa.gov/ipg/news/finalReport.htm).

— John Moore

Tools for the security-minded software developer

Security experts place a premium on sound processes when it comes to secure software development, but automated tools can also play a role.

One class of tools — code analyzers — can help developers program more securely. SPI Dynamics’ DevInspect, for example, automates input validation. Failure to validate input can lead to buffer overflows and other attacks. Other code analyzers include Microsoft’s Prefast, Ounce Labs’ Prexis, Parasoft products such as Jtest and C++test, and the open-source software Flawfinder. The National Institute of Standards and Technology’s Software Assurance Metrics and Tool Evaluation project maintains a list of code analyzers and other software assurance tools at samate.nist.gov/index.php/Main_Page/.

Cloakware, meanwhile, addresses another facet of secure coding. The company provides a precompiler tool that mathematically modifies source code. When compiled, the new source code results in object code that is difficult for a potential hacker to reverse-engineer, according to the company.

Other tools perform penetration testing as an extension of the quality assurance process, said Samir Kapuria, director of strategic consulting at Symantec. And then there are auditing tools that check for vulnerabilities such as SQL injection and cross-site scripting. Organizations can deploy those tools in a test environment or run them on applications already in production. Programmers and vendors say the use of such tools promotes programmer education.
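SQL injection, one of the flaws those auditing tools hunt for, boils down to splicing user input into query text. The C sketch below uses SQLite’s C interface purely for illustration, with an invented table; it contrasts the vulnerable pattern with a parameterized query.

    #include <sqlite3.h>
    #include <stdio.h>

    /* Look up a user by name. */
    static void lookup_user(sqlite3 *db, const char *name)
    {
        /* Vulnerable: building the SQL string from raw input lets a
           value such as  x' OR '1'='1  rewrite the query.

           char sql[256];
           snprintf(sql, sizeof sql,
                    "SELECT id FROM users WHERE name = '%s'", name);
           sqlite3_exec(db, sql, NULL, NULL, NULL);
        */

        /* Safer: a parameterized query passes the input as pure data,
           so the database never interprets it as SQL. */
        sqlite3_stmt *stmt = NULL;

        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) == SQLITE_OK) {
            sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                printf("id: %d\n", sqlite3_column_int(stmt, 0));
        }
        sqlite3_finalize(stmt);
    }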

The U.S. Agency for International Development uses SPI Dynamics’ WebInspect auditing tool. Bill Geimer, director of information assurance at Open System Sciences and the company’s program manager on an information security contract at USAID, said the tool explains vulnerabilities in detail, making programmers aware of security issues.

Wayne Ariola, vice president of corporate development at Parasoft, said education is a benefit of the company’s toolset. “It detects the vulnerability [and] gives you a whole documented layout of the issue,” he said.

— John Moore

Watch your language

Security experts say some programming languages are more conducive to writing secure code than others are.

In this context, the Ada language receives kudos in some development circles for promoting security. The Defense Department cultivated Ada in the late 1970s and early 1980s in a bid to reduce the number of languages used to develop custom software.

Ada was designed with large, critical applications in mind, which contributes to its security reputation.

“One design criterion was that it be easy to read and maintain, even if this is at the expense of ease of coding,” said Robert Dewar, president and chief executive officer of AdaCore, which specializes in Ada development environments. “So we are happy in Ada to make the programmer say more so more checking can be performed by the compiler or at runtime, allowing errors to be detected early.”

Dewar said security concerns have led to renewed interest in Ada and SPARK, a subset of Ada. Ada and other languages that impose rigorous rules make it more difficult to accidentally introduce errors that leave an application vulnerable to buffer overflow attacks, said Nancy Mead, a senior member of the technical staff at the Computer Emergency Response Team Coordination Center. Such languages impose greater coding discipline on programmers.
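The contrast Mead draws can be seen in an error as small as an off-by-one loop. The C sketch below is a hypothetical illustration, not an example from Dewar or Mead; the comments note how Ada treats the same mistake.

    int main(void)
    {
        int buf[2];

        /* C compiles the following off-by-one loop without complaint
           and, at runtime, silently writes past the end of buf:

               for (int i = 0; i <= 2; i++)
                   buf[i] = 0;

           In Ada, indexing outside an array's declared bounds raises
           Constraint_Error at runtime, so the same slip is caught
           rather than left as a silent overflow. */

        for (int i = 0; i < 2; i++)   /* correct bounds: 0 and 1 */
            buf[i] = 0;

        return 0;
    }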

— John Moore