House AI Task Force wants to marry light-touch regulation with sector-specific policy
Members of the bipartisan House AI Task Force discussed their priorities for artificial intelligence-focused legislation and how that might impact regulations.
Members of the House AI Task Force on Wednesday reiterated their priorities for artificial intelligence legislation introduced in the lower chamber, with a focus on keeping humans at the center of AI technologies and their applications while maintaining a light-touch regulatory approach.
Speaking during an interview with WTOP recorded in September and aired on Wednesday, Reps. Don Beyer, D-Va., and Jay Obernolte, R-Calif., shed light on what the task force will focus on in its forthcoming report. As members of the House AI Task Force, Beyer and Obernolte both emphasized their desire to eventually advance the 14 bills the group endorsed earlier this fall.
A spokesperson for Beyer told Nextgov/FCW that the House AI Task Force will be publishing a report at the end of 2024 on those 14 bills and their contents.
“I am cautiously optimistic that our report will be the furthest thinking document on AI that's been produced by the legislative branch so far,” Beyer said. He added that advancing legislation will hinge on continued bipartisan work.
“We've worked really hard, we have a couple pieces of legislation that aren't bipartisan, and the sad part is –– or the realistic part is –– they're not going to pass unless they get bipartisan,” he said.
The bills target a host of concerns, including the spread of disinformation, threats to national security and cybersecurity, a lack of transparency around training databases, and deepfakes and other synthetic content.
Staying true to guidance echoed by private sector leaders, government coalitions and agencies, Beyer said that lawmakers are prioritizing keeping a human in the loop when deploying an AI system in sensitive environments and applications.
“A lot of what we're working on in Congress right now is trying to make sure that human beings are the final agents in the use of AI,” Beyer said.
Both Beyer and Obernolte also aim to keep future regulatory bills as light-touch as possible. Obernolte said the House AI Task Force is recommending an approach that asks sector-specific regulators with extensive industry knowledge to help determine how to police AI tools in their respective fields.
“We call it the hub-and-spoke approach; we believe in sectoral regulation,” Obernolte said. “It very much matters what you're using the AI to do.”
He cited examples such as the Food and Drug Administration, which has already processed more than a thousand applications for the use of AI within medical devices, and said decisions on automated cars and air traffic control would fall under the jurisdiction of the National Highway Traffic Safety Administration and the Federal Aviation Administration, respectively.
“The big AI leaders have been the ones coming to Congress a number of times the last couple of years, saying, ‘please give us some guidelines. Give us what the benchmark should be,’” Beyer said, contrasting the House’s legislative projects with the European Union’s sweeping AI Act, which places a majority of obligations on developers of high-risk offerings and outright prohibits certain systems. “They don't want to be like Europe.”
Prioritizing freedom to innovate and creativity within the AI space is another factor lawmakers will keep in mind as they continue to push legislation through both chambers. Obernolte said overburdening small- and medium-sized companies with too much regulation isn’t the goal.
“One of the things that's a little appreciated fact about federal regulation is that it almost always has the consequence of empowering large companies and disadvantaging small companies and entrepreneurialism, because the more complicated you make the regulatory landscape, the more legal resource that a company would need to deal with that,” he said. “We don't regulate tools. We regulate outcomes.”