IARPA aims to thwart cyberattacks with psychology
The intelligence research agency is looking to exploit and automate hackers' cognitive biases to help defend against potential cyberattacks.
The Intelligence Community's leading research agency is exploring how to develop new algorithms that thwart cyberattacks by psyching out the attackers themselves.
The Intelligence Advanced Research Projects Activity is planning to host an event in San Diego next month to gain insights on how to design methods that would defeat cyber attackers by turning their "innate decision-making biases and cognitive vulnerabilities" against them.
IARPA's Reimagining Security with Cyberpsychology-Informed Network Defenses, or ReSCIND, program will hold a proposers' day on Feb. 28 where registrants will participate in a series of five-minute lightning talks discussing how human psychological limitations can be identified, measured and influenced and ultimately automated to counter cyberattack behavior.
Cyberpsychology is a developing field that studies human interactions with internet-connected devices, frequently focusing on areas where web-based tools can affect mental health, such as social media, or influence decision-making, such as e-commerce.
But understanding user behavior as it relates to cybersecurity has also become a growing field of research, specifically around how decision-making abilities can be exploited by malicious actors.
Psychologists are studying how adversaries leverage cognitive biases in areas like socially engineered cyberattacks and disinformation campaigns, but also how cybersecurity defenders can weaponize attackers' own biases against them.
IARPA issued a cyberpsychology request for information in September 2022, noting that cognitive effects "relevant to cyber attackers have begun to be hypothesized, but only a few have been validated" in the cybersecurity context.
"Recent experiments demonstrating the power of framing effects were investigated, indicating that attackers who were provided information that deceptive technology was present on a network had less forward progress," the RFI said. "Additional work has examined the effect factors like uncertainty have when interacting with other cognitive effects."
According to a 2019 paper by researchers at Arizona State University, the Laboratory for Advanced Cybersecurity Research and the Naval Information Warfare Center, one way to achieve this goal is by deploying a strategy known as Oppositional Human Factors.
"OHF is based on flipping recommendations and techniques that normally improve behavior or usability, in order to disrupt attacker cognition," the paper said. "The attack surface which defenders must protect is growing untenably vast. Attackers are very persistent, and in cyber defense, any realized reduction in threat (including delay) is a success."
IARPA hopes to gain more insight into how such methods could be deployed at its proposers' day. The hybrid event will include both in-person and online participants, with a maximum of 30 lightning presentations.
Interested participants must register by noon PST on Feb. 13.