Some banks moving too slowly to address AI-powered cyberthreats, Treasury says
The agency’s findings will be distributed to Capitol Hill with the hope of drumming up legislation or other initiatives to study the risks.
Some financial institutions are not moving fast enough to adopt adequate risk management frameworks that would help address AI-driven cybersecurity threats, according to a report out Wednesday from the Treasury Department.
Cybercriminals who deploy hacking techniques driven by tools like generative AI chatbots are likely to have a leg up on banks, at least in the short term, the analysis adds, citing 42 interviews with finance firms, trade associations and cybersecurity service providers.
The Treasury inquiry was convened as part of a sweeping AI executive order that directed federal agencies to study and reorient their operations around the fast-evolving technology, which has made headlines over the past year for its rapid adoption in consumer-facing markets.
The order identified financial services — alongside education, housing, law, healthcare and transportation — as industries that could suffer from the misuse of AI technologies. Several federal agencies were asked to author sector-specific reports evaluating the risks and benefits of AI, with due dates staggered after the directive’s signing.
AI chatbots and related tools, like OpenAI’s ChatGPT or Google’s Gemini, have been hailed as powerful productivity enhancers, but the same tools can augment hackers’ abilities to carry out cyberattacks and increasingly credible social engineering scams.
Those have included phishing campaigns, in which hackers build a cloned email, webpage or other common digital item carrying an underlying program that, when clicked, siphons victims’ data or plants malware on their systems, according to a Treasury official who spoke to reporters ahead of the release.
Email phishing attempts against financial institutions have started to look more realistic, said the official, recounting conversations the agency had with those institutions. Historically, many phishing emails contained language signaling to targets that the hacker didn’t speak fluent English, but AI systems have made these attempts far more convincing, the official said.
Scammers have also experimented with voice-cloning technologies to impersonate victims and gain access to their financial information, the official added.
Such AI tools are also being used for malware and code generation. One example described in the report involved using a generative AI platform to build a fake copy of a firm’s website to harvest customer credentials. Hackers have also explored using such tools to scan websites for vulnerabilities, the report says.
While institutions have been sharing anonymized threat information with cyber vendors more frequently, financial firms have been less willing to share fraud-protection information with one another, the analysis found, adding that the absence of fraud data sharing “likely affects smaller institutions more significantly than larger institutions.”
The report is being widely distributed to Capitol Hill offices Wednesday, with the hope of getting lawmakers on board with its findings, the official later added.
Other data-specific risks the Treasury report highlights include data poisoning, data leakage and data integrity attacks, all of which target the sensitive information used to train the AI models themselves. By compromising a model’s underlying training data, attackers can durably skew a large language model’s output, leading the AI system to produce biased, unethical or false answers in response to a given prompt.
While foundational training data is a prime target for hackers, all data handled throughout an AI system’s development and production lifecycle demands safeguards against cybercriminal access, Treasury advised.
Financial regulators have repeatedly sounded the alarm about AI systems and their integration into investment services. Securities and Exchange Commission Chairman Gary Gensler has said that unchecked AI systems could cause a financial collapse in the future.
The SEC issued a rule that requires publicly traded firms to disclose hacking incidents that could materially affect their investors. It aims to bring more transparency to how cyberattacks impact companies’ bottom lines by forcing them to report material breaches within four business days.