FCC commissioner warns of ‘heavy handed’ AI regulation in political ads

FCC Commissioner Brendan Carr speaks at the Conservative Political Action Conference in National Harbor, Maryland in February 2024. Celal Gunes/Anadolu via Getty Images

As the Federal Communications Commission weighs a proposed rule to bring a disclosure regime to political ads that use AI, Republican FCC Commissioner Brendan Carr highlighted technical nuances and public confusion as two potential roadblocks.

FCC Commissioner Brendan Carr worries that, as his agency considers a proposed rule to establish regulations surrounding AI-generated content, the wrong approach could stifle innovation in the burgeoning field.

Proposed by Chairwoman Jessica Rosenworcel this summer, the rule would impose disclosure requirements when AI is used to create political content at both the federal and state levels.

Carr, one of two Republican picks on the five-member FCC, is voting against the proposal.

Describing AI regulation as a potential generational issue, Carr said during a Monday forum hosted by the Federalist Society that his approach mirrors broader U.S. regulatory goals: balancing innovation with guardrails.

“I don't think that we should go with a total sort of fundamentalist libertarian approach to the regulation of AI, but at the same time, I think we can go way too far in terms of heavy-handed regulation before we've seen how the technology can play out,” he said. “That heavy-handed regulation could either tilt the playing field in favor of…established incumbents or the most established in an emerging field, which I think would be detrimental in the long run, or otherwise sort of suppress new and beneficial innovations.”

One key caveat Carr raised was the distinctions among AI technologies themselves. He said the differences between using generative AI to create scripts and using AI software to augment visual content, such as pictures and video deepfakes, give him pause about developing broad rulemaking for a nuanced field.

He also cited partisan concerns surrounding individual parties’ use of AI, potential overreach of the commission’s statutory authority and confusion among the general public when interacting with disclosure regimes.

“Even if the FCC were successful, you would now have this disclosure regime on ads that run on broadcast TV, but you'd have the exact same ad or a similar ad running on the internet, and it wouldn't have the disclosure because the FCC doesn't regulate there, and what kind of impact does that have on viewers, on voters? Do they assume the one without the disclosure must be real and not AI? I think they're sort of ripe for confusion there,” he said.

The agency’s proposed rule comes ahead of the 2024 presidential election and amid mounting concerns about AI-generated content spreading misinformation and disinformation, which have prompted multiple companies to pledge to monitor content on their own platforms.

The FCC has previously voted to adopt new protections, including an outright ban on AI-generated messages in robocalls, that would alert consumers when they encounter AI-enabled content. The commission is still considering how best to regulate content generated with AI technology under its applicable authorities.