Party line differences emerge over AI oversight, international partnerships
At a Senate Commerce Committee markup hearing, Sen. Ted Cruz, R-Texas, offered multiple amendments designed to rein in federal oversight of AI software development, which he says could impair innovation.
Ahead of the upper chamber’s August recess, the Senate Commerce Committee evaluated a flurry of bills focused on artificial intelligence, with support for proposed AI regulation in areas like government oversight and environmental protections falling along party lines.
Eight bills were discussed during the Wednesday markup hearing, covering a range of issues that included developing AI testbeds, leading in standards development, jumpstarting a public awareness campaign on the risks associated with AI systems and expanding small business access to AI model training tools.
Ranking Member Ted Cruz, R-Texas, offered a host of amendments sharing a common premise: that undue regulatory burdens could hinder the U.S. mission to lead in AI and machine learning development and standardization, handing geopolitical adversaries like China an advantage.
“China is just as happy to sit back and let the U.S. Congress do the work of handicapping the American AI industry for it,” Cruz said during his opening remarks. “To avoid the U.S. losing this race with China before it has even hardly begun, Congress should ensure that AI legislation is incremental and targeted.”
The Future of Artificial Intelligence Innovation Act of 2024, for example, would require AI developers to conduct environmental impact assessments — a requirement Cruz filed an amendment to remove.
“If we want to lose the race with China, setting up environmental impact assessments for every single innovation is a really certain way to surrender to China and give up U.S. leadership,” Cruz said.
Committee Chairwoman Sen. Maria Cantwell, D-Wash., spoke in favor of standardization and regulation surrounding such new technologies.
“Government standard setting, done in collaboration with the private sector, is the main reason why we lead in innovation today,” Cantwell said.
The committee voted to reject Cruz’s amendment.
Two of his proposed amendments to the bill did pass, however. The first, a “no woke AI” amendment, bars federal agencies from requiring AI policies that promote principles of critical race theory or any other federally constituted bias.
The second amendment would prohibit Big Tech-linked actors and consultants from participating in AI policy development, stemming from an earlier investigation Cruz opened into Biden administration staff ties to technology companies.
Cruz also offered an amendment to require congressional approval prior to the U.S. government entering into any tech-centric agreement with a foreign government.
“If Washington bureaucrats are going to backdoor the EU's regulations or tech policy into our federal agencies and the guidance they issue, at the very minimum, we should know about it and know what the hell they are doing,” he said.
Cantwell argued that while she agreed with Cruz’s call for more communication with lawmakers on international memorandums of understanding, the flexibility to make these alliances is an important part of staying globally competitive. Following a roll call vote, this amendment failed.

Another failed amendment would have limited the scope of the Artificial Intelligence Research, Innovation and Accountability Act of 2023 to high-impact AI use cases, like critical infrastructure and national security. Cruz cited innovation restrictions and costs as two drawbacks of the bill’s current language.
Aaron Cooper, senior vice president of global policy at BSA | The Software Alliance, said that his organization continues to urge Congress to advance legislation requiring impact assessments for high-risk deployments of AI and establishing risk management programs.
“This workable approach generally aligns with the emerging policy consensus for how to best mitigate risks of AI, and national technology policies in the United States will provide clarity for companies developing and using AI,” Cooper said in a statement.
Cybersecurity firm HackerOne commended the Commerce Committee for passing the Validation and Evaluation for Trustworthy Artificial Intelligence Act, noting that the requirement for AI developers and deployers to collaborate with third parties to red team their software is key to avoiding cyberattacks.
“With the rise in AI threats, the VET AI Act is crucial for addressing vulnerabilities and risks before they impact society, marking a significant step toward responsible AI development and deployment,” HackerOne Chief Legal and Policy Officer Ilona Cohen told Nextgov/FCW in a statement.