Piecing together the insider threat puzzle
Software and automation are crucial to preventing leaks and sabotage, but they're not the only tools deployed by agencies that collect and guard secret information.
The federal government is showing mixed results on its goal of implementing policy to reduce data leaks and other risks posed by government insiders and contractors holding security clearances, according to the most recent updates posted on Performance.gov.
The leaks from former National Security Agency contractor Edward Snowden and the Navy Yard shooting drew attention to efforts to improve and automate processes by which clearances are granted and how access to classified information and networks is monitored. But plans to mitigate risks posed by trusted insiders go back to a 2011 executive order and a subsequent 2012 policy directive setting minimum standards for insider threat programs.
A big piece of the effort is developing systems to continuously evaluate the activities of, and risks posed by, the 5.1 million Americans with access to classified information, along with IT strategies to support those systems' acquisition and development.
The Office of the Director of National Intelligence missed a December 2014 goal to roll out an initial continuous evaluation capability for individuals with the most sensitive clearances, according to the latest update on Performance.gov. A goal to develop an enterprise IT strategy for security, suitability and credentialing by the end of last year was also missed.
There are bureaucratic and practical obstacles to checking all the boxes on the performance goal sheet. To cite just one example, the process to add mental health questions to the information collected on standard personnel security and suitability forms is taking longer than planned. But it's clear that the overall objective of preventing leaks and sabotage is a high priority for institutions that collect and guard secret information.
The Defense Department issued a directive establishing an insider threat program in September 2014. Officials plan to launch an "integrated capability to monitor and audit information for insider threat detection and mitigation." DOD reports that it is on track to have a continuous evaluation capability extended to 225,000 personnel by December.
As it stands, according to Performance.gov, the government is scheduled to deploy an initial governmentwide insider threat program by the end of the year, with final operating capacity achieved by the end of 2016.
The ultimate goal, which is partially complete, is to have regular -- ideally electronic -- access to insider threat data from a range of sources, including counterintelligence, law enforcement, human resources departments and IT access logs. The monitoring of cleared users covers unclassified and classified agency networks and includes mobile devices. Plans also include a hub or portal to fuse and analyze information.
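In concrete terms, a fusion hub of that kind needs a common shape for indicators arriving from very different feeds. The sketch below shows one hypothetical way to model such a fused record in Python; the field names, source labels and severity scale are invented for illustration and are not drawn from any agency system.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical schema for a fused insider-threat record; all names here
# are illustrative, not taken from any real program.
@dataclass
class ThreatIndicator:
    subject_id: str     # cleared individual the indicator concerns
    source: str         # e.g., "counterintelligence", "law_enforcement",
                        # "human_resources", "it_access_logs"
    observed_at: datetime
    description: str
    severity: int       # analyst-assigned weight, 1 (low) to 5 (high)

@dataclass
class FusedCase:
    subject_id: str
    indicators: list[ThreatIndicator] = field(default_factory=list)

    def add(self, indicator: ThreatIndicator) -> None:
        self.indicators.append(indicator)

    def sources(self) -> set[str]:
        # A case drawing on several independent feeds is generally more
        # credible than many hits from a single feed.
        return {i.source for i in self.indicators}
```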
The risks of automation
Designing an insider threat program involves more than developing some rules-based monitoring and flipping a switch.
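For reference, the rules-based monitoring the article has in mind can be as simple as a single-event check like the hypothetical Python sketch below; the threshold and parameters are invented for illustration.

```python
# A bare-bones, single-event rule: flag any large, after-hours transfer.
# The 500 MB threshold is arbitrary and purely illustrative.
LARGE_DOWNLOAD_BYTES = 500 * 1024 * 1024

def rule_trips(bytes_transferred: int, after_hours: bool) -> bool:
    """Return True if this one event trips the naive rule."""
    return bytes_transferred > LARGE_DOWNLOAD_BYTES and after_hours
```

Deploying a check like that is the easy part. Deciding what to watch, and why, comes first.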
Picking an automated solution is way down the list, said Mike Theis, a former counterintelligence agent who now leads research on insider threat detection for the CERT Insider Threat Center at Carnegie Mellon University.
Speaking at AFCEA DC's April 2 Cybersecurity Technology Summit, Theis said program design begins with prioritizing the critical assets and understanding the best places to observe possible threats to those assets. That might mean putting monitoring tools on IT networks, but it could also mean relying on data from access badges and camera feeds.
There are risks to relying on automation to ferret out insider threats.
"The unintended consequence that I often worry about as we add more and more technological measures for monitoring data and data access is that people will start to think it's not their job anymore to stop anomalies," said Neal Ziring, technical director at NSA's Information Assurance Directorate.
There is also the risk of tuning threat detection to be overly sensitive to single anomalies. Ziring said it was important to be "careful with triggering off of single events." Like any intelligence work, insider threat detection requires a mosaic approach in which data is corroborated and fused into a larger picture, one that takes into account employee activity on IT networks and around classified information, along with financial pressures or sudden unexplained changes in income, among other factors.
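One hedged way to picture that mosaic logic is a weighted corroboration score, as in the hypothetical Python sketch below; the signal names, weights and threshold are invented for illustration and carry no operational meaning.

```python
# Mosaic scoring sketch: no single anomaly raises an alert on its own;
# independent, corroborating signals must accumulate first.
SIGNAL_WEIGHTS = {
    "bulk_download": 2,               # from IT access logs
    "after_hours_access": 1,          # from badge or network data
    "security_policy_violation": 2,   # from security or HR reporting
    "unexplained_income_change": 3,   # from financial review
}

ALERT_THRESHOLD = 5  # deliberately above any single signal's weight

def mosaic_score(signals: set[str]) -> int:
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def should_escalate(signals: set[str]) -> bool:
    # Requires at least two independent signals, so the detector never
    # triggers off a single event, per Ziring's caution.
    return len(signals) >= 2 and mosaic_score(signals) >= ALERT_THRESHOLD
```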
It's important to view a range of activities, including sabotage, as insider threat risks. "We always talk about exfiltration as the threat, but there's also infiltration," said Air Force Chief Technology Officer Frank Konieczny.
Division of labor
One specific outgrowth of the Snowden leaks was the recommendation by President Barack Obama's Review Group on Intelligence and Communications Technologies to create an "administrative access" clearance to prevent systems administrators and network operators (like Snowden) from gaining access to the substance of intelligence work.
One solution to that challenge is a highly differentiated division of labor, Ziring said.
"You want to apply classic information security principles," he said, "so your administrators are not the same group necessarily as your cyber defenders, though they have to work together, who aren't the same group as your insider threat mitigators, though they have to work together."
This approach also builds in a needed check against potential abuses of insider threat programs, Theis said. The individuals who analyze insider threat data shouldn't be able to modify data-collection policies or targets, and individuals who set the policies and targets shouldn't see the data.
"You should not be able to say, 'Now I want to deploy the tool against the director and read his email or her email,'" he said.