FCC to consider disclosure requirement for AI-generated content in political ads
AI-made materials have already seeped into election campaigns around the world.
The Federal Communications Commission said it will consider a proposal requiring disclosure of AI-generated materials in radio and television political advertisements.
Chairwoman Jessica Rosenworcel circulated the measure to her colleagues on Wednesday. If adopted, it would seek feedback on how to require on-air and written disclosures of AI-generated materials and how broadcasters, cable operators and other programming entities should comply with the rule.
Officials, lawmakers and researchers have repeatedly expressed concerns that AI tools may supercharge election misinformation or further enable disinformation campaigns aimed at disrupting electoral processes leading into November.
There’s a “clear public interest obligation for commission licensees, regulatees, and permittees to protect the public from false, misleading, or deceptive programming and to promote an informed public — and the proposed rules seek to achieve that goal,” Rosenworcel's office said in the announcement of the proposal.
Tech giants including Meta and Google have required similar disclosures on their platforms. A voluntary commitment taken up by some 20 tech firms in February would also see companies collaborating to create watermarking and detection tools to identify deceptive AI content. Such machine-generated materials have already worked their way into elections around the world.
The proposed measure, which the FCC says is supported by authority conferred on it by the 2002 Bipartisan Campaign Reform Act, would be a first-of-its-kind mandate obligating the U.S. broadcast landscape to inform the public about AI content embedded in political ads heard on the radio or seen at home.
AI-generated content has already seeped into the 2024 U.S. election process. Ahead of New Hampshire’s presidential primary in January, residents received a robocall with an AI-generated voice of President Joe Biden that told voters not to go to the polls. Following an outcry, the FCC issued a unanimous ruling in February deeming robocalls that use AI-generated voices illegal.
Foreign adversaries have already been found deploying fake social media personas that engage with or provoke real-life users in an attempt to gauge U.S. domestic issues and learn which political themes divide voters.
The National Association of Broadcasters, a trade group representing radio, TV and broadcast stations, did not immediately respond to a request for comment.
Top cyber and intelligence officials told a Senate panel last week that the U.S. is prepared to handle election interference threats later this year, but stressed that AI-generated content will further challenge authorities' ability to identify sham information.
Bipartisan legislation introduced earlier this month would push the nation’s top election administration agency to develop voluntary guidelines that address the potential impact of AI technologies on voting.
Rep. Yvette Clarke, D-N.Y., unveiled a bill last year that would mandate political ad disclosure requirements similar to the FCC proposal.
"The 2024 election is around the corner, and due to a lack of congressional action, there has been little to stem the tide of deceptive AI-generated content that could potentially mislead the American people. Thankfully, the Biden Administration is taking proactive measures to protect our democracy from the dangers posed by bad actors," Clarke said in a statement.
Nextgov/FCW Staff Reporter Edward Graham contributed to this report.
Editor's note: This article was updated May 22 with additional comment.