Americans want AI disclaimers, FEC commissioner says
Federal Election Commission Commissioner Shana Broussard’s insights follow the agency’s recent decision on generative AI in political ads.
Ahead of the 2024 U.S. presidential election, Commissioner Shana Broussard of the Federal Election Commission said that the rulemaking process related to artificial intelligence in political advertisements revealed broad public support for disclaimers.
Broussard’s comments follow the FEC’s decision released last month, which reiterated that existing regulations under the Federal Election Campaign Act extend to AI-generated content. The decision came in response to a petition from the nonprofit Public Citizen asking for an outright ban on the use of deepfakes in political ads.
Although the FEC ultimately did not adopt a rule outlawing deepfakes, Broussard said during the GovAI Summit in Virginia on Tuesday that the “thousands” of public comments indicated that Americans want disclaimers on synthetic content.
“I think that the greatest discovery that we made, at least, is that people are really looking forward to disclaimers,” she said. “Something that gives information, in fact, that this is an AI generated communication.”
Broussard added that the FEC has to continue to balance First Amendment rights to freedom of speech with ensuring that federal elections are conducted fairly in the age of AI. Although she noted the agency can’t require a disclaimer at this stage, Broussard said there is “promising” action in Congress on the legislative front.
“From the standpoint of what the petitioners really want and what the public really wants, [it] is knowledge,” she said.
The FEC’s decision did find that the agency’s current statutes are technology neutral, meaning they are flexible enough to apply to newer technologies, but that a mechanism to require an AI disclaimer on ads can only come from legislation passed by Congress.
“We promulgate regulations based upon the statutes that Congress produces,” she said. “And if they do that, then I can act.”
The spread of advanced generative AI content has long been a concern ahead of election seasons around the world because of its potential to influence election outcomes. Leading tech companies, including Meta, Google, and OpenAI, agreed earlier this year to take voluntary steps to mitigate synthetic political content.