Use AI to fight AI-generated election threats, report recommends
Hyperlocal voter suppression, language-based influence operations and deepfakes were listed as the most likely AI-generated threats to upcoming elections.
Hostile nation-states and other nefarious actors are likely to use artificial intelligence to spread misinformation ahead of November’s elections. But U.S. voting officials, companies and other groups can combat these lies by elevating factual information and even by deploying AI capabilities of their own, according to a series of mitigation strategies released Tuesday by the Aspen Institute’s Aspen Digital program.
The organization identified three specific AI-powered threats that it said bad actors are likely to use this election season: hyperlocal voter suppression, language-based influence operations and deepfakes.
To counter these threats, Aspen Digital released three checklists that public and private groups can use to help voters better understand the tactics involved and turn to trusted sources of information.
When it comes to localized misinformation, the document said that “AI tools can be used to create highly personalized and interactive content that misleads people about conditions at voting sites, voting rules or whether voting is worthwhile.”
To respond to this threat, Aspen Digital said, in part, that social media platforms could employ emerging technologies to identify false election information, including through the use of “AI tools to monitor content-level narratives to identify trends and detect malign influence actors that may vary language to avoid detection.”
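In practice, one rudimentary form of that monitoring could look like the sketch below, which flags posts pushing the same narrative with varied wording by comparing character n-gram TF-IDF vectors. The example posts and the similarity threshold are illustrative, and real platform systems would rely on far more capable semantic models.

```python
# Illustrative sketch: surface posts that repeat a narrative with varied
# wording. Not any platform's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Polling place on Main St is closed today, vote tomorrow instead",
    "The Main Street polling location is shut, cast your ballot tomorrow!",
    "Reminder: bring a photo ID and double-check your registration status",
]

# Character n-grams catch the spelling tweaks and word swaps that bad
# actors use to slip past simple keyword filters.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
matrix = vectorizer.fit_transform(posts)
similarity = cosine_similarity(matrix)

THRESHOLD = 0.5  # illustrative; tune against labeled examples in practice
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible narrative variant: post {i} and post {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```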
To combat language-based influence operations powered by AI — which the document said “reduces the effort needed for bad actors to create malicious content in any language by automating translation” — Aspen Digital also said that election officials could use similar capabilities of their own.
“Translate first-party resources and consider responsible ways to use AI tools to translate social media and other official communications,” the report suggested for election administrators. “If using AI translation tools, perform quality checks, notify readers when automated translation has been used and link to an authoritative resource, where applicable.”
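A hypothetical version of that workflow is sketched below. The `translate` function is a stand-in for whatever machine translation service an election office has approved, the round-trip similarity check is a crude automated sanity test rather than a replacement for review by a fluent speaker, and the quality threshold is illustrative.

```python
from difflib import SequenceMatcher

def translate(text: str, target_lang: str) -> str:
    """Hypothetical stand-in: wire up an approved MT provider here."""
    raise NotImplementedError

def round_trip_score(original: str, target_lang: str) -> float:
    """Translate out and back, then compare. A crude automated check,
    not a substitute for review by a fluent human speaker."""
    round_trip = translate(translate(original, target_lang), "en")
    return SequenceMatcher(None, original.lower(), round_trip.lower()).ratio()

def publish_translation(original: str, target_lang: str, source_url: str) -> str:
    """Translate an official notice, quality-check it, and append the
    disclosure and authoritative link the report recommends."""
    if round_trip_score(original, target_lang) < 0.7:  # illustrative threshold
        raise ValueError("Low round-trip similarity; route to human review")
    disclosure = (
        "\n\n[This message was translated automatically. "
        f"Authoritative original: {source_url}]"
    )
    # In practice the disclosure itself should also be rendered in the
    # target language.
    return translate(original, target_lang) + disclosure
```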
False or manipulated audio, video or images — known as deepfakes — were also identified as likely threats to the upcoming elections. The risk of AI-generated content being used to influence elections was demonstrated in January, when a robocall impersonating President Joe Biden went out to voters ahead of New Hampshire’s presidential primary.
Aspen Digital said election officials and other organizations should develop verification and mitigation protocols now to identify this type of generated content and work to inform communities about the likelihood of this threat.
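One possible building block for such a protocol is sketched below: comparing a suspect image against a registry of known-authentic official media using perceptual hashes. The registry and its entries are hypothetical, and a production workflow would also check provenance metadata such as C2PA content credentials.

```python
from PIL import Image
import imagehash

# Hypothetical registry: perceptual hashes of verified official images.
AUTHENTIC_HASHES = {
    "official_statement.png": imagehash.hex_to_hash("d1d1b1a1c1e1f101"),
}

def closest_authentic_match(path: str) -> tuple[str, int]:
    """Return the registry entry nearest to a suspect image, plus the
    Hamming distance between their perceptual hashes (0 = identical).
    A small distance suggests a near-copy of authentic media; a moderate
    one can indicate an edited or manipulated derivative."""
    suspect = imagehash.phash(Image.open(path))
    return min(
        ((name, suspect - h) for name, h in AUTHENTIC_HASHES.items()),
        key=lambda pair: pair[1],
    )
```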
In addition, the report said social media platforms should monitor content on “niche forums” to identify deepfakes that might spread to their sites and “consider using AI-enabled, narrative-level summation tools to monitor real time narrative trends.”
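A minimal sketch of that kind of summation is shown below: it counts how often each narrative label appears within a rolling time window and surfaces spikes. The labels are assumed to come from an upstream classifier or clustering step, such as the similarity sketch above, and the window and threshold values are illustrative.

```python
from collections import Counter, deque
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
SPIKE_THRESHOLD = 50  # illustrative; calibrate against baseline volume

# Each entry pairs a post's timestamp with its narrative label, assigned
# by an upstream classifier or clustering step.
events: deque[tuple[datetime, str]] = deque()

def record(timestamp: datetime, narrative: str) -> None:
    """Log a labeled post and evict anything older than the window."""
    events.append((timestamp, narrative))
    cutoff = timestamp - WINDOW
    while events and events[0][0] < cutoff:
        events.popleft()

def trending() -> list[tuple[str, int]]:
    """Narratives seen in the current window, most frequent first."""
    return Counter(label for _, label in events).most_common()

def spikes() -> list[str]:
    """Narratives whose in-window volume crosses the alert threshold."""
    return [label for label, count in trending() if count >= SPIKE_THRESHOLD]
```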
Federal officials have warned about the risk AI-generated content poses to U.S. elections, although they have also stressed that these emerging capabilities are more likely to exacerbate existing cyber and misinformation threats than pose new concerns.