3 Ways Agencies Can Improve Cloud Security and Performance
It’s imperative agencies ensure their websites and hosted applications are secure and working at the expected performance levels.
The Federal Cloud Computing Strategy makes it clear government agencies have significant responsibilities for protecting cloud-hosted data—but the guidelines mapped out in the FCCS may not go far enough.
The FCCS calls on agencies to develop their own governance models and create service-level agreements that guarantee continuous access to log data and prompt notification from their cloud service provider in the event of a breach. Indeed, the onus is on government agencies to comply with regulations and do everything they can to protect “the confidentiality, integrity, and availability of Federal information as it traverses networks and resources” in the cloud.
This is further amplified by cloud providers’ shared responsibility models, which clearly state that security in the cloud, including application-level security, is the responsibility of the customer, not the provider. Therefore, it’s imperative that agencies go even further to ensure their websites and hosted applications are secure and working at the expected performance levels. Here are three best practices agencies can follow to improve the performance and security of cloud-hosted applications and web assets.
1. Get to the root of application performance problems (and understand when these problems indicate a security incident).
Monitoring application performance in the cloud is critical to ensuring issues affecting the availability of key systems are addressed before they impact users. For example, monitoring network traffic flows in real time can reveal heavy bandwidth consumption by apps and users, but it can also reveal malicious or malformed traffic.
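As a rough sketch of that first step, flow records can be reduced to two simple signals: unusually heavy bandwidth consumers and scan-like traffic. This is an illustration only, not tied to any particular monitoring product, and the record fields, addresses, and thresholds are assumptions.

```python
from collections import defaultdict

# Hypothetical flow records of the kind a flow collector might export; the
# field names and values here are illustrative assumptions.
flows = [
    {"src": "10.1.1.15", "dst": "10.1.2.8", "bytes": 48_000_000, "flags": "ACK"},
    {"src": "10.1.1.22", "dst": "203.0.113.9", "bytes": 1_200, "flags": "SYN"},
    {"src": "10.1.1.22", "dst": "203.0.113.9", "bytes": 900, "flags": "SYN"},
]

BANDWIDTH_THRESHOLD = 10_000_000  # bytes per collection interval; tune per environment


def top_talkers(records):
    """Sum bytes per source host and flag unusually heavy consumers."""
    usage = defaultdict(int)
    for flow in records:
        usage[flow["src"]] += flow["bytes"]
    return {host: total for host, total in usage.items() if total > BANDWIDTH_THRESHOLD}


def scan_like_flows(records):
    """Flag flows that look malformed or scan-like: bare SYNs carrying almost no data."""
    return [f for f in records if f["flags"] == "SYN" and f["bytes"] < 10_000]


print("Heavy bandwidth consumers:", top_talkers(flows))
print("Possible scan traffic:", scan_like_flows(flows))
```

The same aggregated records answer both the performance question (who is consuming the bandwidth?) and the security question (does any of that traffic look malicious?).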
Having security information and event management (SIEM) capabilities is essentially like having another pair of eyes. By gathering logs from cloud databases, apps, and websites, a SIEM watches 24/7 for suspicious activity and compliance issues. Teams can quickly cut through the clutter, zero in on vulnerabilities and potential threats, and prioritize where to focus their limited resources first.
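To make the idea concrete, here is a minimal sketch of the kind of correlation rule a SIEM automates at far greater scale: normalized events from several sources are combined, and a simple threshold flags suspicious activity. The event schema, user names, and threshold are illustrative assumptions, not any product’s actual format.

```python
from collections import Counter

# Hypothetical, already-normalized events gathered from cloud databases, apps,
# and websites; the schema and values are assumptions for illustration.
events = [
    {"source": "webapp", "time": "2024-05-01T12:00:01Z", "user": "jdoe", "action": "login_failed"},
    {"source": "webapp", "time": "2024-05-01T12:00:05Z", "user": "jdoe", "action": "login_failed"},
    {"source": "database", "time": "2024-05-01T12:00:09Z", "user": "jdoe", "action": "login_failed"},
    {"source": "webapp", "time": "2024-05-01T12:00:12Z", "user": "jdoe", "action": "login_failed"},
]

FAILED_LOGIN_THRESHOLD = 3  # alert once a user exceeds this count within one batch


def failed_login_alerts(batch):
    """Correlate failed logins across sources and flag users above the threshold."""
    failures = Counter(e["user"] for e in batch if e["action"] == "login_failed")
    return [user for user, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]


for user in failed_login_alerts(events):
    print(f"ALERT: repeated failed logins for {user} across multiple sources")
```

The value of the SIEM is that it applies hundreds of rules like this continuously, across every log source, so analysts see a short list of alerts rather than raw logs.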
But these capabilities can only go so far. Security teams must also have the right incident response processes in place so they can respond to threats quickly and at scale. The ability to prioritize threats by severity and communicate easily with other team members is key when an incident occurs.
It’s also important to educate administrators about the ties between application performance and security. IT professionals without a security background may not realize that a sign of application performance trouble can also point to a cybersecurity risk. Agencies must educate their IT teams on what to look for and how to act.
Training shouldn’t be limited to the IT team. The prevalence of insider threats means all employees should be trained on the agency’s policies and procedures and encouraged to follow best practices to mitigate potential threats.
2. Build with security in mind.
It’s not just users IT needs to worry about. A drop in application availability could also indicate a security incident such as a distributed denial-of-service (DDoS) attack. By monitoring logs and network traffic, IT pros can flag potential security incidents that need to be addressed before the user experience or critical systems are impacted.
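As a simple illustration of that monitoring step, request volumes pulled from web server or load balancer logs can be compared against a recent baseline, with a sudden surge treated as one possible DDoS indicator. The counts, window, and spike factor in this sketch are assumptions.

```python
# Hypothetical per-minute request counts parsed from web server or load
# balancer logs; the values, window, and spike factor are assumptions.
requests_per_minute = {
    "12:00": 1_200,
    "12:01": 1_150,
    "12:02": 1_300,
    "12:03": 58_000,  # sudden surge worth investigating
    "12:04": 61_500,
}

SPIKE_FACTOR = 10  # flag minutes that exceed the recent baseline by this factor


def spike_alerts(counts):
    """Compare each minute's volume against the average of the minutes before it."""
    alerts, history = [], []
    for minute, count in counts.items():
        if history:
            baseline = sum(history) / len(history)
            if count > baseline * SPIKE_FACTOR:
                alerts.append((minute, count, round(baseline)))
        history.append(count)
    return alerts


for minute, count, baseline in spike_alerts(requests_per_minute):
    print(f"Possible DDoS: {count} requests at {minute} vs. baseline of ~{baseline}")
```

A spike alone doesn’t prove an attack, but pairing it with availability data tells responders whether users are already feeling the impact.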
However, there’s no route to 100% breach prevention. Threat actors are stealthy, and their presence within the network may go unnoticed. To get a proactive handle on application security, the cybersecurity function must work closely with developers to build security into code during the development process. Teams must also perform vulnerability scanning on apps to catch flaws before they’re deployed.
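One common way to enforce that last point is a pipeline gate that fails the build whenever a known-vulnerable dependency is present. The sketch below assumes a pinned requirements file and a hypothetical advisory list; in practice a dedicated scanning tool plays this role.

```python
# A stand-in for the pre-deployment gate a real vulnerability scanner provides.
# The advisory list, package name, and requirements file path are hypothetical.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.2"): "EXAMPLE-ADVISORY-001 (placeholder identifier)",
}


def parse_requirements(path):
    """Read pinned dependencies of the form name==version from a requirements file."""
    pins = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                pins.append((name.lower(), version))
    return pins


def scan(path):
    """Return any pinned dependencies that appear on the known-vulnerable list."""
    return [(pkg, ver, KNOWN_VULNERABLE[(pkg, ver)])
            for pkg, ver in parse_requirements(path)
            if (pkg, ver) in KNOWN_VULNERABLE]


if __name__ == "__main__":
    findings = scan("requirements.txt")
    for pkg, ver, advisory in findings:
        print(f"BLOCK DEPLOY: {pkg}=={ver} is affected by {advisory}")
    if findings:
        raise SystemExit(1)  # fail the pipeline so the flaw is fixed before release
    print("No known-vulnerable pinned dependencies found.")
```

Failing the build pushes the fix back into development, where remediation is far cheaper than after deployment.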
3. Derive meaningful security insights from log monitoring data and machine learning.
Log monitoring data is rich with insights, yet performing any kind of analysis on fragmented event logs from multiple sources can be time-consuming. It’s also hard to establish context across event logs, which makes it difficult to pinpoint threats quickly and mitigate them promptly.
But when machine learning and behavioral analysis are applied to log data, new insights are revealed. Security teams can expose patterns and indicators of malware activity in the cloud environment.
They can also look for anomalies in user behavior within the cloud, such as a person authenticating from unexpected locations. If an agency operates exclusively in one part of the country, yet log data suggests an employee has logged in using the same credentials from another part of the world, such as Russia or Iran, it could be a sign of compromised credentials and malicious intent. Using this insight, administrators can move to change those credentials and investigate the incident further.
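Stripped of the machine learning layer, the core of that location rule is simple once log records carry a geolocation field. The sketch below assumes records already enriched with a country code; behavioral baselining and machine learning would build on checks like this one.

```python
# A minimal sketch of the location check described above; the login records,
# country codes, and expected set are illustrative assumptions.
EXPECTED_COUNTRIES = {"US"}  # the agency operates exclusively in one country

logins = [
    {"user": "jdoe", "time": "2024-05-01T09:02:00Z", "country": "US"},
    {"user": "jdoe", "time": "2024-05-01T09:45:00Z", "country": "RU"},
    {"user": "asmith", "time": "2024-05-01T10:10:00Z", "country": "US"},
]


def unexpected_location_logins(records, expected):
    """Flag authentications from countries outside the agency's expected footprint."""
    return [r for r in records if r["country"] not in expected]


for event in unexpected_location_logins(logins, EXPECTED_COUNTRIES):
    # A hit here would prompt administrators to reset the credentials and investigate.
    print(f"Review: {event['user']} authenticated from {event['country']} at {event['time']}")
```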
When it comes to securing digital assets in the cloud, federal agencies have made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, federal agencies can build on what they’ve already accomplished and augment the requirements of the FCCS to further develop their security practices.
Brandon Shopp is vice president of product at SolarWinds.