Industry awaits how OMB's AI guidance, strong on paper, will be implemented in practice

The Trump administration’s new artificial intelligence use and acquisition guidance touches on safeguarding civil liberties, but experts have expressed skepticism about its implementation.
Advocacy organizations are watching how the Trump administration will implement revised guidance on the use of and contracting for artificial intelligence systems, following the Office of Management and Budget's release of two new memorandums last week.
Both memos prioritize ensuring that new AI systems keep federal agencies and offices efficient and reduce bureaucratic backlog, a Department of Government Efficiency tenet, alongside other familiar talking points on good AI governance.
These include setting “strong safeguards” for civil liberties and privacy, reviewing and implementing risk management protocols before and after acquisitions, maintaining chief AI officers to coordinate individual agency efforts and ensuring their models are open source and shared in public federal repositories.
“President Trump recognizes that AI is a technology that will define the future,” Lynne Parker, principal deputy director of the White House Office of Science and Technology Policy, said in a press release. “Today’s revised memos offer much needed guidance on AI adoption and procurement that will remove unnecessary bureaucratic restrictions, allow agencies to be more efficient and cost-effective, and support a competitive American AI marketplace.”
The memos’ emphasis on safety and prioritizing civil rights in government’s use and procurement of AI leaves industry analysts with one big question: How will that emphasis work in practice?
“On paper, this revised guidance maintains core AI governance requirements for federal agencies, including governance structures for managing AI systems within agencies and heightened risk management practices for high-risk systems,” Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, said in a statement to Nextgov/FCW. “But in practice, it remains to be seen how faithfully this guidance will be implemented.”
Anex-Ries added that, although it is “encouraging” to see new guidance from the White House taking AI risk mitigation seriously, it contrasts with reports of DOGE’s alleged use of AI systems that could compromise the sensitive data that both OMB memos seek to protect.
“The true test will be how the Office of Management and Budget works with all federal agencies, including DOGE, to implement these critical requirements in a timely and transparent manner,” Anex-Ries said.
Experts at the American Civil Liberties Union echoed Anex-Ries’s concerns.
“Moving forward, the ACLU will continue monitoring the implementation of the guidance and advocating for stronger guardrails where protections have been scaled back or AI poses new harms,” Cody Venzke, ACLU senior policy counsel, said in a statement. “Before using AI to decide who gets a job, mortgage, or federal benefits — and more — federal agencies must first make sure that AI is up to the task, fair, and safe — and discontinue it when it's not.”
Advocacy organizations specifically focused on AI issues told Nextgov/FCW that the new memos are a “positive step forward” in the evolving Trump AI policy landscape, specifically due to the balance they seek to strike between innovation and adherence to trust-centered governance frameworks.
“We appreciate the continuity shown in retaining foundational mechanisms such as the chief AI officers and their council, as well as formalized management of ‘high-impact’ AI applications,” Manoj Saxena, founder and chairman of the Responsible AI Institute, told Nextgov/FCW. “These elements are essential to maintaining institutional stability in a fast-evolving AI landscape.”
Saxena additionally said that, despite the positive attributes of the OMB memos, the Responsible AI Institute is concerned that they do not include protections against synthetic media that leverages an individual’s likeness without consent — the subject of multiple bills introduced in the last Congress, including the Take It Down Act.
“These are not abstract future risks — they are active threats today, and their exclusion from the high-impact examples is a missed opportunity,” Saxena said. “We encourage ongoing refinement of high-impact definitions and continued public reporting to ensure consistent, accountable, and responsible AI adoption across all agencies.”
As the administration begins to implement the stipulations in both memos, Information Technology Industry Council Executive Vice President of Policy Gordon Bitko told Nextgov/FCW that policymakers should continue reaching out to industry partners for help and resources.
“As the Trump Administration implements a pro-innovation approach to the federal acquisition and use of AI, we encourage policymakers to continue to leverage industry’s deep expertise on how to best streamline existing processes for adopting cutting-edge commercial AI technologies,” Bitko said.
The new memos are the latest executive AI actions to come out of the White House, following President Donald Trump's January AI executive order. Under that order, the White House is set to debut a National AI Strategy in the coming months, following an open industry comment period.