Lawmakers grapple with AI’s potential as they consider future regulations
In a week packed with artificial intelligence hearings and events, lawmakers are focused on bipartisan unity and threading the needle between further advancement and regulation.
Conversations surrounding artificial intelligence and how the federal government can best regulate and leverage advanced generative systems are dominating Congress this week. On the regulatory front, bipartisan teamwork on the Hill will be required for any comprehensive law governing AI systems and their applications.
On Tuesday, Sens. Martin Heinrich, D-N.M., and Mike Rounds, R-S.D., discussed the upper chamber’s strategy for crafting AI guardrails during a Washington Post forum, where they previewed a forthcoming private Senate meeting with major tech sector players aimed at learning more about how to regulate the emerging technology.
The closed-door briefing is being led by Sen. Chuck Schumer, D-N.Y., and will host high-profile private sector leaders, including Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman and Tesla and SpaceX CEO Elon Musk.
Rounds, who is slated to co-moderate the session, said the purpose of the closed event is to help legislators understand how best to regulate AI by first reviewing how it operates.
“It's really a matter of trying to get out as much information for everybody as we possibly can,” Rounds said. “What we don't want to do is to regulate from a point of not having good information in the first place.”
Balancing risk and innovation in AI research is also at the forefront of legislators’ minds as they craft regulatory law, along with the privacy implications of machine learning algorithms that use personal data to learn and produce more accurate outputs. Rounds noted that the latter will be difficult to incorporate into legislation.
“We know that we're going to have challenges making a determination about how you keep privacy and about what data can be guarded and how it can be used and whether it can be amassed in such a fashion that individual privacy is not infringed upon,” he said.
Rounds said that a “light” regulatory approach will likely work better than more stringent laws, but emphasized the need for bipartisan collaboration moving forward.
“I think probably one of the most important things that we do here is if we can keep this on a bipartisan basis where we're talking reasonable people to reasonable people,” he said. “I think there's a path forward.”
Heinrich clarified that progress will depend on individual congressional committees developing a final legislative product.
“It's going to be driven by what items can we actually create consensus around,” he said on Tuesday.
A Tuesday hearing before the Senate Judiciary Committee also focused on threading the needle between regulation and innovation and featured William Dally, chief scientist and head of research at NVIDIA, and Brad Smith, vice chair and president of Microsoft.
Both executives broadly advocated for cooperation between the government and industry as AI systems advance globally.
“We need a safety brake just like we have a circuit breaker in every building and home in this country to stop the flow of electricity,” Smith said regarding regulations for AI technologies that could work within critical systems.
Smith put forth the idea of a licensing regime as a “critical start” toward better gauging the riskier applications of AI technologies, such as those in critical medical or transportation infrastructure.
“I think that a licensing regime is indispensable in certain high risk scenarios, but it won't be sufficient to address every issue,” Smith said.
Dally noted that maintaining a human-centric approach to the operation of AI systems, a tactic federal agencies in the Biden administration have also supported, will continue to mitigate inadvertent harm from an AI’s outputs.
“I think the way we make sure that we have control over AI of all sorts, is by for any really critical application, keeping a human in the loop,” Dally said.