Is AI the missing piece for government agencies to achieve zero trust security?
Here are three factors to consider.
For the last year or two, artificial intelligence (AI) has been the most talked-about topic across sectors, including government. While there is broad recognition of its immense promise, there is just as much conversation about its implications. From a cybersecurity perspective, a central emerging question has been: is AI the long-awaited breakthrough that can finally overcome the obstacles to widespread adoption of zero trust security? While the short answer may be yes, the complete answer is more nuanced.
Zero trust security rests on a fundamental principle: trust nothing and no one, and continuously verify the identity of every entity seeking access to network resources. Although the concept has been around for more than two decades, its implementation has been hindered by a seemingly endless array of challenges. This is particularly true for government agencies, where complex technology and infrastructure are the norm, and where factors such as compliance requirements and the potential consequences of a breach differ from those in the private sector. The recent emergence of AI, however, presents an alluring prospect: the technical barriers may finally be surmountable.
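To make the principle concrete, here is a minimal sketch of a zero trust policy decision point in Python. Every name, field, and threshold here is hypothetical; real deployments evaluate far richer signals, but the default-deny shape is the point:

```python
from dataclasses import dataclass

# Hypothetical illustration of a zero trust policy decision point:
# every request is evaluated on its own merits; nothing is trusted by default.

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool         # identity proven via multi-factor authentication
    device_compliant: bool     # device posture meets the agency baseline
    resource_sensitivity: int  # 1 (low) to 3 (high)

def decide(request: AccessRequest) -> str:
    """Default-deny: access is granted only when every check passes."""
    if not request.mfa_verified:
        return "deny"  # an unverified identity is never trusted
    if not request.device_compliant:
        return "deny"  # a valid user on a non-compliant device is still denied
    if request.resource_sensitivity >= 3:
        return "deny"  # high-sensitivity resources require a separate, stronger flow
    return "allow"

# A verified user on a compliant device requesting a low-sensitivity resource:
print(decide(AccessRequest("jdoe", mfa_verified=True, device_compliant=True,
                           resource_sensitivity=1)))  # -> allow
```

Note that the function never consults network location: being "inside" the perimeter confers no trust, which is the defining departure from traditional perimeter security.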
As government agencies consider AI in the context of zero trust, here are three key considerations:
The Human Factor Remains Critical
It is essential to keep in mind that, at its core, zero trust is not a technology issue and therefore cannot be solved by technology alone. Beyond the technical and engineering work involved, successfully adopting AI models to support zero trust and broader cybersecurity fundamentals requires a focus on human factors. It demands substantial organizational change management, as it fundamentally alters how individuals carry out their daily responsibilities and how agencies operate. Viewed another way, without attention to the human factor, there can be no successful digital factor.
The Balancing Act of Risks and Benefits
It is undeniable that AI possesses unparalleled capabilities to adapt to dynamic environments and unforeseen situations. The sheer volume of instantaneous decisions required for a zero trust environment to operate as intended would be unattainable without AI. However, government agency leaders and operators must recognize that AI carries risks as well as benefits. Can important security decisions be delegated to an algorithm when those relying on it may not fully understand how it makes them? In short, trusted AI capabilities have become imperative to mitigating these risks, including potential privacy infringements.
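One way to reconcile delegation with oversight is to let the algorithm handle clear-cut cases while escalating ambiguous ones to a human. The sketch below assumes a hypothetical model that emits a risk score between 0 and 1 for each access request; the thresholds and names are invented for illustration:

```python
# Illustrative only: assumes a hypothetical AI model that scores each
# access request's risk in [0.0, 1.0]; thresholds are invented.

ALLOW_BELOW = 0.2  # low risk: the algorithm may decide on its own
DENY_ABOVE = 0.8   # high risk: deny automatically

def route_decision(risk_score: float) -> str:
    """Delegate clear-cut cases to the model; escalate the ambiguous
    middle band to a human analyst rather than trusting an opaque call."""
    if risk_score < ALLOW_BELOW:
        return "allow"
    if risk_score > DENY_ABOVE:
        return "deny"
    return "escalate_to_human_review"

for score in (0.05, 0.5, 0.95):
    print(score, "->", route_decision(score))
```

The escalation band is one pragmatic answer to the delegation question above: the algorithm absorbs the decision volume, while humans retain authority over the cases it cannot resolve transparently.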
Heightened Sophistication
Another critical risk to be mindful of, of course, is that the same technology can be used to breach systems, whether through impersonation or through increasingly sophisticated discovery and exploitation of system weaknesses. This is why educating the modern government workforce and upskilling talent are critical to ensuring employees can identify potential issues.
The Bottom Line
While risks must be accounted for, AI and its capabilities are ultimately more friend than foe. As government agencies increasingly turn to AI for zero trust security, careful consideration of the factors discussed above is vital to successful implementation. By prioritizing robust training, human oversight, and transparency, agencies can harness the full potential of AI while maintaining the integrity and security of their systems. With the right approach, AI can become a powerful tool in the government's arsenal to combat cyber threats and enforce zero trust security effectively.
Tony Hubbard is the principal, government cybersecurity leader at KPMG LLP.