Forthcoming House AI report will call for incremental regulation
Rep. Jay Obernolte, R-Calif., said he hopes the House AI Task Force’s report will debut sometime in the coming week.
A soon-to-be-released report from the House AI Task Force will highlight bespoke, incremental regulation and federal preemption for artificial intelligence technologies, as lawmakers seek the best route to apply rules to AI systems without stifling industry growth.
Rep. Jay Obernolte, R-Calif., discussed approaches slated to be included in the report during an Amazon Web Services-sponsored event on Wednesday. The report, which Obernolte hopes to debut sometime in the coming week, will center on incrementally passing focused legislation to cultivate rules for AI.
“One of the guiding principles that we are embracing is the principle of incrementalism,” he said Wednesday evening. “We think it is foolish to believe that we know enough about AI and the direction AI is going to move in the next few years to be able to do an effective job completely regulating with one bill next year. So we think it behooves everyone to embrace the idea that we need to break this up into bite-sized pieces.”
Obernolte anticipates that Congress will need to incrementally pass more application-specific legislation as AI technologies and use cases continue to grow. This approach stands in direct contrast to other regulatory regimes that have addressed the technology with large-scale action, particularly the European Union’s sweeping AI Act, which passed earlier this year and entered into force in August.
With the House report set to be released soon, Obernolte said it will serve as a starting point for future guidance and regulations.
“Our hope is that this will serve as a guide to future Congresses,” he said. “This is obviously not the last word in AI. It is just the first word in AI.”
Obernolte was joined on Wednesday night’s panel by fellow Rep. Zach Nunn, R-Iowa, who underscored the need to consolidate internal government regulation on AI prior to doling out regulations for private industry. He also touched on the familiar needle U.S. regulators are looking to thread: considering AI model safety without hamstringing innovation.
“For all the innovation, for all the diffusion of capability that AI is going to bring to the table here, we need to recognize we have some very clear threat vectors that are also emerging,” Nunn said.
At least one chapter of the coming report will detail how federal preemption — federal law superseding state and local directives — can be applied to AI. Obernolte cited the need to avoid a patchwork of regulations governing how AI can be deployed, so as to continue to spur innovation in the field while still guarding against federal overreach.
“The issue is complicated,” he said. “You absolutely cannot say ‘the federal government is going to preempt all of it,’ because there are certain uses of AI that are very much in the purview of state regulation. But … if we allow all 50 states to create this patchwork of 50 different regulations on what AI could be deployed and what can't be … that's going to create an atmosphere that not only is destructive to innovation, but is very harmful to entrepreneurialism.”
Obernolte also theorized about the future of some federal AI programs ahead of President Donald Trump’s second term. During his campaign, Trump made it clear he would repeal President Joe Biden’s landmark executive order on AI, shedding light on the incoming administration’s potential perspective on AI policy.
Obernolte said he expects the Trump administration to comb through Biden’s order to decide what to keep and what to discard, pointing to the use of the Defense Production Act to mandate the disclosure of model capabilities that could include proprietary information, as well as the broad executive branch power granted through the order.
“There are a lot of pieces of the EO that we didn't believe in, but that's not to throw out the entire context of what the executive branch was trying to accomplish,” he said. “So I think the incoming administration is going to have to weed through that and decide what to keep and what to throw out.”
Protecting the AI Safety Institute, another uncertainty ahead of the incoming Trump administration, is a top priority for Obernolte. Housed within the National Institute of Standards and Technology, the AISI acts as a consortium that brings federal scientific and policy leadership together with private industry and civil society advocates to help deploy and measure safer AI systems.
Calling the AISI critical to U.S. success in leading AI development and deployment, Obernolte said that its power lies in setting international standards for AI technologies and engaging different industries to design sector-specific AI regulations.
“[Sectoral regulation] has proven to be a very effective model, but it will only work if we empower those sectoral regulators with the resources and tools necessary to do their jobs, and those testing and evaluation standards are something that's part of [AISI’s] mission,” he said.
Editor's note: This article has been updated to correct the timeline for the forthcoming House AI Task Force report.