Industry leaders urge Congress to enact responsible AI regulations
Some tech observers have suggested hitting pause on generative artificial intelligence development, but industry leaders told lawmakers that the tech could benefit from legislation that builds trust in AI.
Industry leaders called on congressional lawmakers to adopt new legislation regulating the advancement of artificial intelligence tools as part of an effort to ensure responsible AI development and guard against potential misuse and ethical harms.
Executives from leading industry groups urged the Senate Committee on Commerce, Science and Transportation to support legislation that aims to build trust in AI while continuing to maintain U.S. global competitiveness and advance innovation.
"Congress should not wait to enact legislation that creates new obligations for companies that develop and use AI in high-risk ways," Victoria Espinel, CEO of the software trade group BSA, testified Tuesday. "The window to lead conversations about AI regulation is rapidly closing, as other governments are moving to shape the rules that will govern AI’s future."
While the U.S. has so far led the world in AI research and development, other countries are beginning to form their own regulations and laws around emerging technologies, moves that could impact global competitiveness and cooperation. The European Union is expected to finalize its first comprehensive regulatory bill on AI by the end of the year, while Japan is spearheading a G7 initiative to establish common standards for AI governance.
A debate has meanwhile unfolded in the U.S. over the pace of AI development, with some private sector leaders calling for a six-month pause in the training of AI systems so that shared safety protocols can be developed and implemented and new technologies rigorously audited for vulnerabilities and other potential concerns.
Recent surveys show that the vast majority of U.S. executives believe AI is critical to meeting their growth objectives and fear for the longevity of their businesses if they are unable to scale AI systems, according to Rob Strayer, executive vice president of policy for the Information Technology Industry Council.
"Continued investment in AI research and development, by both the government and private sector, is essential for the United States to maintain its leadership position," Strayer told lawmakers. "Regulatory policies that encumber the ability of researchers and developers in the United States will drive investments and research activities into other countries."
Strayer added that private sector organizations will support pro-innovation policy frameworks that are risk-based and continue to advance investment in AI research, such as the National Institute of Standards and Technology’s AI Risk Management Framework, which he said already "provides companies with a comprehensive way to think about risk management practices."
Rather than pausing AI development, the witnesses encouraged lawmakers to create meaningful guardrails for high-risk uses of AI, like requiring companies to establish risk management programs that identify and mitigate potential dangers across AI systems and conduct annual impact assessments.
"Congress can build on tools that already exist today — and require companies to adopt those tools to identify and mitigate risks associated with high-risk AI systems," Espinel said.