Senate bill mandates disclosures on AI-made material
The bipartisan legislation calling for the labeling of AI-generated content follows similar efforts made in the House earlier this year.
Two senators are aiming to distinguish content created by generative artificial intelligence systems from human-made content with new legislation, as deceptive online content threatens to spread misinformation.
Introduced by Sens. Brian Schatz, D-Hawaii, and John Kennedy, R-La., the AI Labeling Act of 2023 would require developers to include notices on AI-generated content, such as image, video, audio and multimedia products. Under the bill, the disclosure itself must be “clear and conspicuous,” embedded in the content’s metadata, and either permanent or difficult for online users to remove.
The responsibility for adding these disclosures would fall on the developers of the AI systems, echoing the Biden administration’s broader push to hold Big Tech companies more accountable for developing safe AI software.
“It puts the onus where it belongs: on the companies and not the consumers,” Schatz said during remarks on the Senate floor on Tuesday. “Labels will help people to be informed. They will also help companies using AI to build trust in their content.”
Schatz added that the bill was motivated by the increased volume of scams and other threats linked to generative AI systems.
The new Senate bill resembles legislation introduced in the House in June of this year, which would similarly mandate disclaimers on AI-generated material. Both proposed bills would make the Federal Trade Commission responsible for enforcing such labels.
The bill comes as the White House is expected to unveil a broad AI executive order outlining the nation’s posture in developing and adopting more AI systems and devices.