New AI tools spawn fears of greater 2024 election threats, survey finds
State and local officials are increasingly concerned about the potential for AI tools to spread disinformation and power phishing attacks during the 2024 election cycle.
State and local officials are concerned that artificial intelligence tools and other emerging technologies will supercharge threats to the 2024 U.S. elections, according to the results of an election cybersecurity survey released by cyber firm Arctic Wolf on Tuesday.
The report surveyed more than 130 state and local government leaders — including officials responsible for IT systems and cybersecurity — to gauge their views on election security and found that “more than half of respondents reported they are not at all prepared or somewhat prepared to detect and recover from election-targeted cyber incidents.”
A plurality of officials expressed particular concern that the election cyber threat landscape in 2024 will be even worse than during the 2020 election cycle, with respondents saying that the risks posed by AI tools are exacerbated by resourcing and staffing limitations.
“Adding to the feelings of unpreparedness is that election officials and administrators are expecting a significant uptick in the volume of attacks compared to what they saw in 2020, with almost half (47.1%) expecting an increase, while less than 3% (2.9%) believe they will see a decrease,” the survey said.
Adam Marrè, chief information security officer at Arctic Wolf, told Nextgov/FCW that the results of the survey underscored the fact that emerging technologies pose unfamiliar challenges for election officials.
“We can point to the rise of advanced AI tools and the feeling of underpreparedness to the growing concerns of phishing attacks and the spread of disinformation,” Marrè said. “The novelty of these AI tools and their potential to impact elections is something many respondents may not be prepared to face.”
When it came to the types of cyber threats that election officials are most concerned about, the survey found that 50.7% of respondents cited disinformation campaigns as their top worry, followed by 47.1% who cited phishing attacks targeting election workers and 45.6% who voiced concern about hacking attempts on election systems.
“AI algorithms can learn to mimic human behavior on social platforms and be nearly indiscernible from the real thing to an untrained eye,” Marrè said. “Even deeper, these AI-crafted falsehoods can target specific demographics to influence public opinion and potentially sway election results.”
He added that the growth of AI tools has also “amplified the capabilities of phishing tactics for cyber criminals,” with nefarious actors now more easily able to “create seemingly perfect phishing emails targeted to an individual, increasing the likelihood of a successful breach.”
Federal and state officials have expressed similar concerns about how foreign and domestic adversaries could use emerging technologies — including generative AI — to weaponize election disinformation and potentially fuel doubts about the validity of future voting results.
In a Jan. 3 Foreign Affairs article, Cybersecurity and Infrastructure Security Agency Director Jen Easterly, Kansas Secretary of State Scott Schwab and CISA Senior Election Security Adviser Cait Conley wrote that “generative AI is complicating the jobs of election offices at a time when many of them remain underresourced and understaffed.”
“With generative AI, the United States can expect an increase in the scale and sophistication of these efforts in a wide variety of tactics: misleading voters about candidates by using false messaging or altered images; targeted voter suppression campaigns using generative AI to impersonate election officials to spread incorrect information about voting center locations and hours of operation; and deepfake images or videos of election workers casting and counting fake ballots, to name just a few,” the officials wrote.