FEC takes preliminary step toward regulating AI in political ads
The Federal Election Commission voted to advance a petition calling for the agency to amend its rules around “fraudulent misrepresentation” to include ads that use deceptive AI.
The Federal Election Commission took its first step toward regulating generative artificial intelligence technologies by unanimously voting Thursday to advance a petition for rulemaking on the use of deceptive AI-generated content in campaign advertisements.
The petition — approved during the agency’s monthly meeting — asked the FEC to amend its current regulations on “the fraudulent misrepresentation of campaign authority” to clarify that the statute also applies “to deliberately deceptive artificial intelligence campaign ads.”
The FEC’s action opens the petition to a 60-day public comment period beginning sometime next week, after which the commission will decide whether to proceed with a final rule.
Thursday’s vote came after the FEC failed to advance a similar petition during the agency’s June meeting. Both petitions were filed by the consumer advocacy group Public Citizen, which has voiced concerns about the use of deepfakes and other AI-generated content in political campaigns.
“Deepfakes pose a significant threat to democracy as we know it,” Robert Weissman, Public Citizen’s president, said in a statement following the commissioners’ vote. “The FEC must use its authority to ban deepfakes or risk being complicit with an AI-driven wave of fraudulent misinformation and the destruction of basic norms of truth and falsity.”
FEC Commissioner Allen Dickerson — who voted against last month’s petition over regulatory concerns — said the revised version submitted by Public Citizen was within the agency’s jurisdiction. But Dickerson voiced lingering doubts about its intent, saying “there’s absolutely nothing special about deepfakes or generative AI, the buzzwords of the day, in the context of this petition,” given the agency’s existing regulations.
“Lying about someone’s private conversations, or posting a doctored document, or adding sound effects in post-production or manually airbrushing a photograph — if intended to deceive — would already violate our statute,” he added.
Political candidates and committees have already begun to experiment with using AI to boost their campaigns. The Republican National Committee released an entirely AI-generated video in April that attacked President Joe Biden following his re-election announcement, and a super PAC supporting Florida Gov. Ron DeSantis’s 2024 presidential campaign used AI to imitate former President Donald Trump’s voice in a television ad.
Democratic lawmakers, in particular, have voiced concern about how generative AI will impact campaigns and elections moving forward, with many of them calling for regulatory action amid broader bipartisan talks about establishing federal regulations around the use of generative tools and technologies.
Rep. Yvette Clarke, D-N.Y., introduced legislation in May to require political ads to include a disclaimer if they were made using AI.
In an interview with Nextgov/FCW last month, Clarke said the 2024 U.S. elections “will be the first cycle where AI-generated ads will be an integral part of how we do campaign advertising,” and that lawmakers need to act now “to erect guardrails that make it plain to the American people that these are AI-generated advertisements.”
Following the FEC’s vote in June, a bicameral group of 50 Democratic lawmakers — led by Rep. Adam Schiff, D-Calif., and Sens. Ben Ray Luján, D-N.M., and Amy Klobuchar, D-Minn. — also sent a letter to the FEC requesting that the agency “reconsider its decision.”