Experts warn that OMB’s AI guidance could slow federal adoption of the emerging tech
Commentators say proposed definitions of “rights-impacting” and “safety-impacting” use cases could wind up saddling low-impact AI programs with high-impact controls.
A number of AI stakeholders and experts — many of them from technology industry associations and trade groups — are warning that forthcoming artificial intelligence guidance for federal agencies could stifle the use of even low-risk AI in the government.
The Office of Management and Budget released draft guidance on the government’s use of AI shortly after the White House issued a sweeping executive order on the technology in late October. A final version is expected by late March, and stakeholder comments have piled up in the docket.
That draft would require agencies to use minimum risk-management practices for AI tools, like real-world performance testing for systems deemed “safety-impacting” or “rights-impacting.”
Systems that meaningfully influence or control activities like the functioning of electrical grids or decisions about government benefits, for example, appear on lists provided by OMB of use cases that agencies should presume fall into these high-impact categories.
An OMB spokesperson told Nextgov/FCW that “the draft guidance takes a risk-based approach to limiting AI harms, limiting increased safeguards to contexts in which the use of AI poses a meaningful risk to the rights and safety of the public.”
But some stakeholders still worry that the guidance could mire AI adoption in red tape, as use cases swept up into those definitions would be subject to a list of new processes and requirements.
“I’m very concerned that it’s going to lead to a risk-averse view on AI when we right now need to be embracing the technology,” said Ross Nodurft, executive director of the Alliance for Digital Innovation, during a Dec. 6 House Oversight and Accountability subcommittee hearing on AI policy. The alliance is made up of government contractors.
“There is a gap between the guidance that is currently being provided and the way that that guidance can be realized at the different agencies,” he continued. “That delta is going to cause people who are individually empowered authorizing officials trying to leverage this AI to make decisions on whether or not it’s good or bad based on some of the definitions that can use more specificity, frankly.”
“The definition of ‘rights-impacting’ could capture non-high risk AI utilization,” the Chamber of Commerce wrote in a comment. Under the draft guidance, systems deemed rights-impacting are subject to additional minimum risk-management requirements beyond those for safety-impacting systems.
Several technology trade groups responded to the draft guidance with concerns about OMB’s definitions for AI systems.
The Information Technology Industry Council wrote that “as it currently stands, OMB is defining virtually all applications as high-risk and therefore making it difficult or impossible for federal agencies to adopt AI tools.”
The Software Alliance warned about ambiguity in OMB’s definitions of what is high-risk and murkiness around the threshold for triggering minimum practices, and the Software and Information Industry Association also flagged the potential for low-risk activities to get swept up as high-risk.
Industry groups aren't the only ones with concerns.
“The welcome policy will face a persistent challenge: people fail to recognize high-risk AI use cases,” a comment from the nonprofit Center for Democracy and Technology reads, suggesting OMB clarify the guidance and provide support, such as a cross-agency working group, to help agencies categorize their use cases.
Even defining AI more broadly is a challenge.
“Neither the scientific community nor industry agree on a common definition for [AI capabilities],” the Government Accountability Office notes in a recent report. “Even within the government, definitions vary.”
“While the OMB draft memo is admirable in many respects, key parts may have the unintended effect of tying innovation up in red tape,” Daniel Ho, law professor at Stanford University and director of the Regulation, Evaluation and Governance Lab, told Nextgov/FCW.
“That does not mean that outsourcing is the solution,” he said. “What is needed is the development of technologists within government who can lead the adoption of responsible AI.”
He and a group of five other academics and former government officials, including civic tech leader Jennifer Pahlka, warned in their joint comment that the minimum requirements for all government benefits or services in particular — listed in both OMB’s definition of rights-impacting AI and its tally of uses presumed to be rights-impacting — could sweep up non-controversial, low-risk applications, too.
They urge OMB to narrow and clarify the definition of rights-impacting AI and distinguish among types of AI and types of benefit programs.
The minimum requirements for rights-impacting AI systems — which include human review, opt-out options and public consultation — combined with a wide scope “could threaten core operations across a wide range of government programs” and “impede important modernization efforts,” they write, especially within the context of already risk-averse government agencies.
"The OMB memo is exemplary in spelling out the opportunities and risks of AI. But process must be tailored to risk," Ho stated in remarks prepared for the hearing. "For example, the memo’s proposal that agencies allow everyone to opt out of AI for human review does not always make sense, given the sheer variety of programs and uses of AI. The U.S. Postal Service, for example, uses AI to read handwritten zip codes on envelopes. Opting out of this system would mean hiring thousands of employees just to read digits."