Obernolte calls on fellow lawmakers to tackle sector-specific AI regulation

Jay Obernolte speaks onstage during the HumanX AI Conference 2025 at Fontainebleau Las Vegas on March 09, 2025 in Las Vegas, Nevada. Obernolte encouraged his fellow lawmakers to take action on the recommendations laid out by last Congress' AI Task Force at an event April 2. Big Event Media/Getty Images for HumanX Conference
The chair of the House AI Task Force in the last Congress said federal regulation is needed to unify the country’s AI policy as states advance divergent rules of their own.
Rep. Jay Obernolte, R-Calif., doubled down on his calls to advance federal artificial intelligence regulations in a sector-specific approach, as outlined in the report released by the bipartisan AI Task Force late last year.
Speaking during an Information Technology Industry Council summit on Wednesday, Obernolte said sector-specific regulation can give companies effective guidance on safely deploying AI systems while still allowing them to research and innovate.
“If you look at the risk management framework that NIST put out last year — which has been acknowledged as probably the most useful document for analyzing the potential risk of AI deployment that's been produced anywhere in the world — what the report makes clear is that the risks of deployment are highly contextual, so it matters very much what you're going to do with the AI when you evaluate what the risks are,” he said. “And that's incredibly important, because that means that something that's unacceptably risky in one context might be completely benign in another context.”
Without a federal framework, a bevy of bills have been introduced in state legislatures to tackle AI regulation. Obernolte said that too many states passing differing rules of the road for AI technologies will hinder the U.S. goal to further innovation in AI and machine learning.
“If we fail to take action in Congress, we are running the risk that all the states are going to get out ahead of us, as they have on data privacy, and, in short order, we're going to have 50 different standards for what constitutes safe and trustworthy deployment of AI,” he said. “That's very destructive, not only for our ability to innovate with AI, but also … to entrepreneurialism.”
He added that Congress must take advantage of the legislative runway in the 2025 session to codify the sector-specific approach and other recommendations from the House AI Task Force. This would ideally include a central repository for AI model testing and evaluation methodologies.
“We can't expect all of our sectoral regulators to be experts on AI and the different risks and failure modes that come with it,” he said. “So we need to equip them with a toolbox of not only testing and evaluation methodologies, but regulatory sandboxes for testing … potentially malicious AI, [in addition to] pools of technical talent that they can rely on and draw on to give them the expertise that they might not have, so that they don't have to reinvent the wheel in each of our different agencies.”
Obernolte said other discussions he is having with fellow lawmakers include the data privacy threats posed by more advanced agentic AI systems — which can execute specific tasks through autonomous decision-making and continued learning — as well as threats from the Chinese-made large language model DeepSeek.
“We don't know where that data is going, and we suspect that it has the same vulnerabilities that TikTok did,” Obernolte said. “I think we're going to have a monumental task ahead of us of educating Americans about the need to be careful and the need to establish a relationship of trust with whoever is acting as your agent, particularly if it's then done by automation.”