AI escalates election cyber threats with the US as prime target, reports find
Election workers face mounting concerns over the potential for AI systems to fuel disinformation. Outside cyber threats aren’t helping, either.
A pair of reports from a private sector cyber intelligence firm and a prominent national security think tank warn that the U.S. is a prime target for election interference at a time when generative artificial intelligence and manipulative AI-generated content are poised to upend the November presidential election cycle.
Of the roughly 60 nations holding major nationwide elections in 2024, the U.S. is expected to face the highest level of cyber-based interference threats from nation-state adversaries like Russia and China, enterprise threat intelligence firm Tidal Cyber determined in an analysis released Thursday.
An election security whitepaper released the same day from the German Marshall Fund’s Alliance for Securing Democracy also warned that election officials — who are already under increased pressure due to declining public trust in the integrity of elections and escalating threats linked to false claims about the 2020 election results — face a set of new threats resulting from advancements in AI technologies.
The reports come a day after DHS’s Cybersecurity and Infrastructure Security Agency launched a webpage providing election security resources to state and local governments, as well as election workers, amid heightened fears that cybercriminals and nation-state governments may seek to interfere in the 2024 election process through hacking or disinformation.
Tidal has observed multiple hacking operations conducting election interference activity, mainly Russia- and China-backed campaigns, as well as groups linked to Iran and North Korea.
The Russia-linked APT28 and APT29 groups topped the list for the number of countries they were observed targeting, with a combined total of 35 nations between them. The latter group, sometimes called Cozy Bear by researchers, has been blamed for the hack of the Democratic National Committee ahead of the 2016 U.S. elections and recently reported breaches of Hewlett Packard Enterprise and Microsoft, as well as the 2020 SolarWinds hack.
“At issue are not only the risks posed by AI-enabled information manipulation, but equally importantly, the perception hacking that AI can facilitate,” the GMF whitepaper noted.
To prepare for these threats, the think tank advised officials to consider AI risks when training new staff or planning their election operations, strengthen their cybersecurity practices and — when appropriate — use AI tools and other emerging technologies to counter deepfakes or streamline more mundane election-related tasks.
Hackers seeking to harm election processes have mainly targeted campaign staff through email phishing, as well as through identity-based attacks that abuse password recovery features to break into political media accounts or related targets, according to Tidal’s analysis. They have also attempted to disable public-facing election sites that display voter information or turnout results.
More recent cyberattacks have involved ransomware. But Tidal’s report also highlighted the dangers of insider threats tied to ongoing lawsuits that stem from conspiracy theories by former President Donald Trump and his allies about the results of the 2020 election.
Beyond emphasizing the threats posed by AI technologies, GMF also highlighted resources available from federal agencies — including CISA, the Election Assistance Commission and the National Institute of Standards and Technology — detailing how officials can manage and mitigate the risks posed by AI.
The GMF document also recommended that officials consider piloting content authenticity technologies that “show how an image was created and how it has been altered over time” to slow the spread of deepfakes and other manipulated materials. Election administrators should simulate AI threats, including by “conducting mock elections and tabletop exercises” with staff to identify vulnerabilities in their election systems and “to test resilience against AI-driven phishing campaigns and disinformation,” according to the document.
Participants in the electoral arena are grappling with a plethora of anxieties this year. A deepfaked robocall imitating President Joe Biden’s voice, now under investigation, has been deemed just the tip of the iceberg for potential disinformation in November and beyond. Election workers worry they face threats of violence from voters who don't accept election results.
Election administrators have told lawmakers repeatedly that more federal assistance is needed to safeguard voting systems and personnel from both cyber and physical threats, although officials also say that any new election funding, even if such a measure passes, would arrive too late to make much difference in 2024.
Former National Security Agency and Cyber Command leader Gen. Paul Nakasone recently told Congress that the November election would be the safest yet, adding he has not seen efforts to disrupt or interfere with the election process.
CISA Director Jen Easterly said later in that same hearing that the U.S. should “absolutely expect” foreign actors to attempt to influence elections, but stressed that Americans should be confident in election infrastructure.
CISA, EAC, NIST and the Federal Election Commission did not return requests for comment.