New bill proposes warning labels for AI-generated material
The AI Disclosure Act seeks to combat potential disinformation created using the emerging technology.
Seeking to stay ahead of potential disinformation that could result from recent advancements in artificial intelligence, a new House bill proposes an automatic disclosure of content created by the technology.
The AI Disclosure Act, introduced Monday by Rep. Ritchie Torres, D-N.Y., would require a source disclaimer on any AI-generated output.
“Artificial intelligence is the most revolutionary technology of our time. It has the potential to be a weapon of mass disinformation, dislocation and destruction,” said Torres in a statement. “There is danger in both under-regulating and over-regulating. The simplest place to start is disclosure. All generative AI should be required to disclose itself as AI. Disclosure is by no means a magic bullet, but it’s a common-sense starting point to what will surely be a long road toward federal regulation.”
Under the bill, the Federal Trade Commission would enforce the measure through its existing authority over unfair or deceptive acts or practices, with violations subject to civil penalties.
Torres said in a statement that the bill would cover “videos, photos, text, audio and/or any other AI-generated material.”
The new measure is part of a growing list of AI-centric efforts on Capitol Hill. Last month, Rep. Yvette Clarke, D-N.Y., sponsored a bill that called for AI disclosure requirements for campaign ads, after the Republican National Committee released such an ad made with generative AI.
The AI Disclosure Act was referred to the House Committee on Energy and Commerce for consideration.