Deepfakes will be a ‘big problem’ for the 2024 election, officials say
Leaders from the Federal Election Commission and the National Intelligence Council noted that bad actors’ capabilities are increasing, while the government’s policing capabilities remain limited.
Synthetic content created by artificial intelligence systems poses a threat to the myriad 2024 elections both within the U.S. and abroad, federal officials recently warned.
Daniel Breedlove, the Deputy National Intelligence Officer for Emerging & Disruptive Technologies at the National Intelligence Council, described what the intelligence community is observing regarding the proliferation of realistic synthetic content during a discussion hosted by Foreign Policy on Thursday.
Breedlove underscored the ODNI’s inclusion of AI-generated deepfakes and misinformation among transnational threat issues, citing statistics that point to a tenfold growth in the number of deepfakes detected globally across various industries between 2022 and 2023, with North America seeing the largest increase: a 1,740% surge.
“I think [it] confirms a trend that misinformation, disinformation and deepfakes are becoming a big problem,” he said.
Breedlove highlighted two environments where this synthetic content is likely to proliferate in 2024: elections and wartime environments.
“The ability to rapidly produce realistic content in multiple languages is expected to increase foreign actors’ propaganda capabilities, especially in 2024,” he said.
Domestically, there are limits to policing lifelike content manufactured by AI. Dara Lindenbaum, a commissioner for the Federal Election Commission of the United States, spoke candidly about her agency’s restricted enforcement abilities in the emerging realm of deepfakes during a separate panel event hosted by the Aspen Institute and Columbia University on Thursday.
“The short answer is that the FEC is fairly limited in what it can do in this space,” she said. “Even if we can regulate here, it's really only a candidate-on-candidate bad action.”
Lindenbaum said that congressional action would dictate a potentially expanded role for the FEC in policing deepfakes and synthetic political content.
“Congress could expand our limited jurisdiction,” she said. “If you ask me, three, four years ago, if there was any chance Congress would regulate in the campaign space and really come to a bipartisan agreement, I would have laughed. But it's pretty incredible to watch the widespread fear over what can happen here.”
Following the 2023 approval of a rulemaking petition on expanding its fraudulent misrepresentation rules to apply to AI-generated campaign content and advertisements, the FEC received “thousands” of comments from various stakeholders as it weighs whether the agency can amend its regulations.
“We are in the petition process right now to determine if we should amend our regulations, if we can amend our regulations,” she said. “It is my hope that Congress and states and others looking at this will read all of these comments as they try to come up with possible creative solutions here.”
While Lindenbaum expressed doubt that Congress will make any statutory changes to the FEC’s enforcement authorities before the November 2024 elections, she said she sees “changes coming” as lawmakers like Sens. Amy Klobuchar, D-Minn., and Mark Warner, D-Va., helm legislative efforts to police deepfakes.
“This discussion of AI and how AI is…at the forefront of everything that we're discussing in this country, I think it has brought more to light, this misinformation, disinformation and the ways that the information gets disseminated, that it is bringing that discussion out,” Lindenbaum said. “So things could change. I'm hopeful.”