Case Study: Cybersecurity best practices at Defense
The U.S. military's computer systems are probed by outsiders millions of times a day, while insiders, like a soldier who allegedly extracted heaps of classified files for public consumption on the WikiLeaks website, also pose threats.
In mid-July, the Pentagon released an unprecedented cybersecurity strategy that formally branded cyberspace as a domain of warfare, akin to land, sea, air and space. But instead of outlining offensive measures, the framework focuses on how to deter the enemy from ever attempting an attack.
As part of this plan, the military is employing "active cyber defense" -- an amalgamation of sensors, software and intelligence reports aimed at instantly blocking malicious activity.
Active cyber defense will build on existing methods of tracking vulnerabilities, according to the strategy. One such building block may be an Army model under development, commonly known as continuous monitoring.
Challenge
The Army requires constant visibility into the security status of all computing assets so it can deliver the military the information it needs the moment it needs it.
"The beauty of the design for continuous monitoring is you get to see, know and do," says Michael J. Jones, chief of the emerging technologies division within the Army's CIO/G6 Cyber Directorate. The "know" elements "give the commander a better understanding of which vulnerabilities are a priority." As for "do," he adds, "that's where the leaders in the Army get paid the big bucks."
Currently, the Army has scanning machinery in place to collect security stats from most information technology assets. That's the seeing part. The Army's network operations and security centers watch each technology's rate of compliance with security standards.
But center staff can't possibly tackle every abnormal finding at once, and some weaknesses matter less than others. The Army needed a way to prioritize action.
Progress
Continuous monitoring is expected to deliver center commanders a means of understanding the nature of risks and who is on the hook for mitigating them, Jones says.
Every weakness identified by the surveillance equipment is given a risk score -- the higher the score, the greater the threat. This is the "knowing" part of the see, know and do.
For instance, if an IT system's antivirus program has not been updated in more than seven days, it gets a bad score. If a system does not have the proper configuration settings, a high risk score is tabulated. And if a system is missing the latest patches, or bug fixes, the risk score increases.
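To make those scoring rules concrete, here is a minimal sketch of how such a calculation might work. The weights, field names and the way the seven-day antivirus check is applied are illustrative assumptions; the article does not disclose the Army's actual formula.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative weights -- the Army's real scoring formula is not public.
STALE_ANTIVIRUS_WEIGHT = 10
MISCONFIGURATION_WEIGHT = 25
MISSING_PATCH_WEIGHT = 15

@dataclass
class AssetStatus:
    name: str
    last_antivirus_update: date
    misconfigured_settings: int  # settings out of compliance
    missing_patches: int         # patches or bug fixes not yet applied

def risk_score(asset: AssetStatus, today: date) -> int:
    """Return a risk score for one IT asset; higher means greater threat."""
    score = 0
    # Antivirus definitions older than seven days raise the score.
    if today - asset.last_antivirus_update > timedelta(days=7):
        score += STALE_ANTIVIRUS_WEIGHT
    # Each configuration setting out of compliance adds risk.
    score += MISCONFIGURATION_WEIGHT * asset.misconfigured_settings
    # Each missing patch adds risk.
    score += MISSING_PATCH_WEIGHT * asset.missing_patches
    return score

# Example: stale antivirus (+10), two bad settings (+50), three
# missing patches (+45) yields a score of 105.
server = AssetStatus("mail-server-01", date(2011, 7, 1), 2, 3)
print(risk_score(server, today=date(2011, 7, 20)))  # 105
```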
Last fall, the Army conducted a "know" pilot and successfully scored the threat intensity of more than 20,000 IT assets.
But the test revealed that the scores aren't that useful for responding -- the "doing" part -- without having someone to call to fix the problems. "One of the lessons learned from the pilot was the need to identify who, which Army organization, is responsible for ensuring the security of IT devices identified as not meeting specific compliance standards," Jones says.
He anticipates that continuous monitoring will be fully deployed and operational in 2013.
Key Issues
-- Apply scanners and sensors to all IT assets to keep tabs on potential vulnerabilities.
-- Ensure the data culled by the surveillance tools feeds into a central location.
-- Develop a scoring mechanism to quantify the severity of each security risk.
-- Prioritize fixes according to risk score. Respond to the big numbers first (see the sketch after this list).
-- Assign specific staff to oversee the security posture of each asset.
-- When the monitoring machinery detects trouble, managers should dispatch the group responsible for bringing the network component into compliance.
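Taken together, these steps amount to a simple triage loop. The sketch below continues the scoring example above (reusing its AssetStatus and risk_score) and adds an asset-to-organization mapping; the asset and organization names are invented for illustration.

```python
from datetime import date

# Hypothetical mapping of each asset to the organization responsible
# for bringing it back into compliance (the pilot's "who to call").
RESPONSIBLE_ORG = {
    "mail-server-01": "Network Enterprise Center A",
    "workstation-17": "Network Enterprise Center B",
}

def triage(assets: list[AssetStatus], owners: dict[str, str], today: date) -> None:
    """Score every asset, then dispatch fixes highest risk first."""
    scored = [(risk_score(asset, today), asset) for asset in assets]
    for score, asset in sorted(scored, key=lambda pair: pair[0], reverse=True):
        if score == 0:
            continue  # in compliance; nothing to dispatch
        org = owners.get(asset.name, "UNASSIGNED")
        print(f"risk {score:4d}: {asset.name} -> notify {org}")

triage(
    [
        AssetStatus("mail-server-01", date(2011, 7, 1), 2, 3),
        AssetStatus("workstation-17", date(2011, 7, 18), 0, 0),
    ],
    RESPONSIBLE_ORG,
    today=date(2011, 7, 20),
)
# prints: risk  105: mail-server-01 -> notify Network Enterprise Center A
```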