FCC to consider new protections against AI-generated robocalls next month
The proposal comes four months before the U.S. presidential election, ahead of which experts have long feared AI could be used to misinform voters.
The Federal Communications Commission will soon consider a measure that would allow the agency to craft first-of-its-kind rules for AI-generated robocalls.
The agency will decide next month whether to pursue efforts to formally define AI-generated calls and adopt new rules directing callers to disclose to consumers when they are using AI in such conversations. The measure would also seek to preserve “positive” use cases, such as AI-powered voices that help people with disabilities use their phones.
The agency has already declared voice cloning technology in scam calls illegal, following the deployment of an AI-generated voice of President Joe Biden in this year’s New Hampshire primary that urged voters not to go to the polls. But not all AI-generated phone messages are used for scams. The proposed rules would aim to clarify this distinction while still keeping consumers shielded from misleading messaging.
The FCC will vote on whether to pursue the measure at its Aug. 7 open meeting, the agency said in a press release. The proposal builds on an agency inquiry launched in November seeking further information about AI’s implications for the telecommunications space, including the evolving technology’s impact on robocalls and robotexts.
Spam and robocalling operations have traditionally relied on human managers to oversee calling schemes, but AI technologies have automated some of those tasks. Robocall networks have begun to leverage the speech- and voice-generating capabilities of consumer-facing AI tools available online or on the dark web.
The expected vote also comes amid heightened fears that AI systems will supercharge the spread of election misinformation and disinformation in November and beyond. AI-made materials have already seeped into election campaigns around the world.
The potential rules follow a related proposal announced in May that would consider disclosure requirements for AI-generated content in radio and television political advertisements. The agency did not say whether it will also take up that item next month.
A May survey from cloud call center provider Talkdesk found that 21% of respondents expect their vote to be swayed by election deepfakes and misinformation, while 31% said they fear being unable to reliably distinguish between real and fake election content.
A new proposal from Rep. Rick Allen, R-Ga., would make calls created by artificial intelligence tools “subject to the same regulations and standards as traditional telephone-based systems,” Nextgov/FCW reported last week.