How to make continuous security monitoring work
Improving cyber defenses with real-time awareness takes an investment in planning and new capabilities.
From the start, some CIOs have harbored nagging doubts about the effectiveness of the Federal Information Security Management Act. After all, does the rearview-mirror perspective on security that the now 10-year-old law requires really protect an agency from the latest security threats and future vulnerabilities?
The Office of Management and Budget and Homeland Security Department are tackling those concerns with calls for agencies to continuously monitor security-related information across the enterprise, including near-real-time oversight of hardware, software and services to uncover breaches as they’re unfolding.
Although many critics of FISMA’s old paperwork-heavy approach praise the move, continuous monitoring is proving difficult to implement, especially at large agencies with complex IT infrastructures. For example, OMB reported in March that only seven of 24 agencies are more than 90 percent compliant with FISMA and cited continuous monitoring management among the biggest problem areas.
The challenges are many, from cataloging every IT resource that must be monitored to finding tools and processes that can analyze oceans of firewall logs and other voluminous data quickly enough to spot threats. For now, agency IT managers have no turnkey solution for layering continuous monitoring on top of their existing security strategies.
Why it matters
Continuous monitoring promises to elevate FISMA regulations from checkbox items on compliance audits to something that better protects agencies against a spectrum of threats, from opportunistic hackers to advanced persistent threats of highly organized or state-sponsored attackers.
Federal agencies recognize the value. The U.S. Capitol Police is two years into its continuous monitoring efforts, and so far, it’s encouraged by the results.
“We can see what’s happening on our network right now, and we can get the right people and the right tools in place at a critical event,” said Richard White, the U.S. Capitol Police's chief information security officer. “There’s not a lot of wiggle room. You have to respond fast, you have to respond accurately, and you have to be sure that once you’ve responded the incident is mitigated.”
But in a time of severe budget constraints, IT executives don’t have anything like a blank check for new investments, even for something as important as security. That’s why some early success stories are noteworthy. Federal Computer Week's sister publication Government Computer News reported that continuous monitoring at the State Department succeeded on two fronts: It lessened the highest risk threats by 90 percent in 2009 while simultaneously reducing certification and accreditation costs by 62 percent.
The fundamentals
Even with early success stories and a growing list of best practices, continuous monitoring still demands extensive preparation, new strategies for using existing IT resources and, often, investments in new technologies. All of that makes it a long-term project best implemented in modular phases.
The first step is a comprehensive audit of an agency's IT environment, something that security managers said can take at least a year to complete. Agencies can automate much of the auditing with commercial tools such as Microsoft Visio and NetworkView, which scan environments for available resources.
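The sketch below, in Python, illustrates the kind of sweep those discovery tools automate: it probes a hypothetical subnet for hosts answering on a few common ports and records what it finds. The subnet, port list and thread count are placeholder values, and a real audit would lean on dedicated scanners and authenticated inventory data rather than a simple connect scan.

```python
# Minimal asset-discovery sketch: probe a hypothetical subnet for hosts that
# answer on a few common management ports. Real audits rely on dedicated
# scanners and authenticated inventory data; this only illustrates the idea.
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network

SUBNET = "10.0.0.0/24"             # hypothetical address range
PROBE_PORTS = (22, 80, 443, 3389)  # SSH, HTTP, HTTPS, RDP

def probe(host: str) -> dict | None:
    """Return the host and any responsive ports, or None if nothing answered."""
    open_ports = []
    for port in PROBE_PORTS:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                open_ports.append(port)
        except OSError:
            continue
    return {"host": host, "ports": open_ports} if open_ports else None

def discover(subnet: str) -> list[dict]:
    """Probe every address in the subnet concurrently and keep the responders."""
    hosts = [str(ip) for ip in ip_network(subnet).hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(probe, hosts)
    return [r for r in results if r]

if __name__ == "__main__":
    for asset in discover(SUBNET):
        print(asset)
```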
But part of the challenge is the level of detail that’s required. When DHS started the process 18 months ago, it did more than identify all the computers, operating systems, applications, storage systems and networking gear running across the department. It drilled down to see how each agency logged IT activities, handled patch management and performed vulnerability scans.
“We were looking at 1,200 data feeds across the organization,” said Emery Csulak, deputy CISO at DHS.
The agency has since reduced the number to about 400 feeds by culling redundancies and streamlining systems.
Although time-consuming, asset audits identify what resources an agency needs to monitor, and just as importantly, they show what oversight capabilities are already in place.
“A lot of people think they have to go out and buy a specific continuous monitoring product,” said Angela Orebaugh, a fellow at Booz Allen Hamilton. “They may be unaware that many of the practices that are involved at the actual data and tool levels, including configuration management tools and vulnerability management, are already in place.”
The trick is expanding data gathering from individual systems and departments to an enterprisewide perspective, experts say.
Next, agencies need programs to aggregate the steady flow of information and analyze it quickly enough for a rapid response. Some agencies are turning to security information and event management (SIEM) applications to collect the data feeds into a consolidated view and identify the highest risks that require the most attention.
“If I wanted to pay attention to every single alert, I would look at the raw syslogs, the firewall application logs, and data from the intrusion detection and all of the other security devices,” White said. “Having the SIEM boil up the critical events helps us avoid wasting time on low-impact events.”
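Whatever product sits underneath, the "boiling up" White describes comes down to consolidating feeds and filtering on severity. The Python sketch below shows that triage step in miniature; the field names, severity scale and sample records are illustrative and do not correspond to any particular SIEM's schema.

```python
# Minimal sketch of SIEM-style triage: consolidate events from several feeds
# and surface only those above a severity threshold. Field names, the severity
# scale and the sample records are illustrative, not any product's schema.
from itertools import chain

CRITICAL_THRESHOLD = 7  # severities run 0-10; anything at or above this is escalated

firewall_events = [
    {"source": "firewall", "severity": 3, "msg": "blocked outbound connection"},
    {"source": "firewall", "severity": 8, "msg": "repeated denies from single host"},
]
ids_events = [
    {"source": "ids", "severity": 9, "msg": "signature match: known exploit kit"},
    {"source": "ids", "severity": 2, "msg": "port scan, low rate"},
]

def boil_up(*feeds, threshold=CRITICAL_THRESHOLD):
    """Merge the feeds and return only the events worth an analyst's attention."""
    merged = chain.from_iterable(feeds)
    critical = [e for e in merged if e["severity"] >= threshold]
    return sorted(critical, key=lambda e: e["severity"], reverse=True)

for event in boil_up(firewall_events, ids_events):
    print(f"[{event['severity']}] {event['source']}: {event['msg']}")
```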
Commercial tools include Hewlett-Packard's ArcSight, RSA's enVision and offerings from Tenable Network Security.
However, some agencies resist commercial SIEM products in favor of a dedicated security database and in-house reporting tools to slice and dice information. “We didn’t want to just buy a tool and try to jam our data into it,” Csulak said.
DHS filters the event information based on a risk-scoring system that helps identify the types of attacks that pose the biggest risks. “We look at the indicators that are the best signifiers of those threats, and then we make sure we are focusing our energy on those pieces rather than the universe of possibilities,” Csulak said.
IT executives have two models for implementing their own risk-scoring systems: the State Department's iPost and the Continuous Asset Evaluation, Situational Awareness and Risk Scoring (CAESARS) reference architecture developed by DHS and other agencies.
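At their core, both models weight individual findings by how dangerous they are and roll those weights up per asset so the worst offenders rise to the top. The sketch below shows that pattern with made-up categories, weights and findings; it does not reproduce the actual scoring formulas used by iPost or CAESARS.

```python
# Minimal risk-scoring sketch in the spirit of iPost and CAESARS: weight each
# finding by category, sum the weights per asset, and rank assets by total
# score. The categories, weights and sample findings are illustrative only.
from collections import defaultdict

WEIGHTS = {
    "unpatched_critical_vuln": 10.0,
    "missing_av_signature": 3.0,
    "config_deviation": 1.5,
}

findings = [
    {"asset": "web-01", "type": "unpatched_critical_vuln"},
    {"asset": "web-01", "type": "config_deviation"},
    {"asset": "hr-db", "type": "missing_av_signature"},
    {"asset": "hr-db", "type": "unpatched_critical_vuln"},
    {"asset": "kiosk-7", "type": "config_deviation"},
]

def score_assets(findings):
    """Aggregate weighted findings per asset and return a ranked list."""
    scores = defaultdict(float)
    for finding in findings:
        scores[finding["asset"]] += WEIGHTS.get(finding["type"], 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for asset, score in score_assets(findings):
    print(f"{asset}: {score:.1f}")
```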
The National Institute of Standards and Technology's Security Content Automation Protocol addresses another piece of the puzzle. It offers a collection of specifications designed to make sure all the various automation tools used for continuous monitoring can work together.
“Applications that are SCAP-compliant can integrate seamlessly and use formats that are not proprietary,” said Joseph Beal, security program manager at Creative Computing Solutions, a systems integrator. “Proprietary standards make it difficult to leverage data from your [intrusion-detection system] or push the information to your ticketing system. All those systems have to be integrated.”
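In practice, consuming SCAP output often means parsing XML result documents and routing the findings onward. The Python sketch below pulls failed rules out of an XCCDF results file, the sort of data that could then feed a ticketing system. The namespace and element names follow the XCCDF 1.2 results schema and the file name is hypothetical, so both should be checked against what a given scanner actually emits.

```python
# Minimal sketch of consuming SCAP output: pull rule results out of an XCCDF
# result document so failures can be pushed to a ticketing system. The
# namespace and element names follow the XCCDF 1.2 results schema; verify
# them against the output your scanner actually produces.
import xml.etree.ElementTree as ET

XCCDF_NS = {"xccdf": "http://checklists.nist.gov/xccdf/1.2"}

def failed_rules(results_path: str) -> list[str]:
    """Return the rule IDs reported as 'fail' in an XCCDF results document."""
    tree = ET.parse(results_path)
    failures = []
    for rule_result in tree.iter("{http://checklists.nist.gov/xccdf/1.2}rule-result"):
        result = rule_result.find("xccdf:result", XCCDF_NS)
        if result is not None and result.text == "fail":
            failures.append(rule_result.get("idref", "unknown-rule"))
    return failures

if __name__ == "__main__":
    for rule_id in failed_rules("scan-results.xml"):  # hypothetical file name
        print("open ticket for:", rule_id)
```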
Finally, agencies should consider how their long-term cloud computing plans, especially for public clouds, mesh with continuous monitoring requirements.
“If you outsource your data into a contractor’s data center, you have to update the contracts with clauses that ask for asset inventories, security configuration compliance and vulnerability scanning,” said Daniel Galik, CISO at the Health and Human Services Department.
The General Services Administration is working with other agencies to develop continuous monitoring provisions for the Federal Risk and Authorization Management Program, the government’s standardized approach to cloud security assessments, authorizations and continuous monitoring. Earlier this year, a GSA official said as many as nine controls for continuous monitoring would be part of FedRAMP by June.
The hurdles
But as the March report from OMB showed, agencies are struggling with continuous monitoring, even as tools and best practices continue to evolve.
One challenge is that tools such as SIEM require fine-tuning. “Deploying a new technology and expecting it to just work probably will make you less secure because you’ll have a false sense of security,” White said.
DHS worked for more than a month to calibrate its SIEM solution to understand how the organization classified low-, medium- and high-risk events. The calibrations are ongoing.
“It’s a continuous effort to train the technology so it gives you the types of alerts that you are looking for,” he said.
Money is another hurdle. Security executives keep costs down by using existing equipment to collect data wherever they can, but any gaps in information gathering, analysis and reporting must be filled with new investments. In the past, some agencies, including HHS, used funds available from the American Recovery and Reinvestment Act to help pay for continuous monitoring products, Galik said.
Financial considerations encourage some executives to become philosophical about investments in continuous monitoring. “You will have to spend some money, but it doesn’t take a very large security breach to make a CIO or CISO wish that their money had been better spent,” White said.
Next steps: Sizing up solutions
When evaluating commercial products for automating continuous monitoring activities, IT managers should ask detailed questions, agency chief information security officers and consultants said. Here’s a checklist of some key areas to cover when shopping for security event management, vulnerability scanning and similar products.
- What types of log files does the solution collect?
- Does it scan devices outside the Microsoft Windows environment, including Linux and Unix systems and network devices?
- What format is used to report the data that’s been collected?
- Is the solution compliant with the National Institute of Standards and Technology's Security Content Automation Protocol?
- If not, which other vendors' products integrate successfully with it?
- What in-house expertise will be needed to manage and use the solution effectively?