Report: agencies’ adoption of GenAI depends on safe and ethical principles


Governments need to adhere to ethical AI principles and comply with regulatory guidance to effectively use GenAI capabilities, according to a recent IBM report.

Government agencies need to prioritize the responsible adoption of emerging capabilities like generative artificial intelligence as they pursue their technology modernization efforts, according to a report released last month by the IBM Center for The Business of Government. 

The analysis — which included interviews with officials in the U.S., Canada and Australia — outlined a framework for how government leaders can implement new capabilities while working to overcome challenges with harmonizing and replacing legacy systems.

“Generative AI has the potential to play a key role in government technology modernization and transformation,” the report said, although it cautioned that “to leverage generative AI effectively, governments must ensure its use adheres to ethical AI principles and complies with regulatory frameworks.”

Guidance around federal agencies’ use of AI tools, however, has been temporarily upended by the new Trump administration.

Former President Joe Biden issued an executive order in October 2023 that outlined governmentwide safeguards for the safe, secure and trustworthy use of AI tools. That guidance was reinforced by a March 2024 memo from the Office of Management and Budget that laid out additional agency-specific requirements.

President Donald Trump subsequently repealed Biden’s directive on Jan. 20 and signed a new executive order later that week calling for federal leaders to develop a new plan for his administration’s approach to AI.

IBM’s report said that adoption of AI should be handled carefully, with a focus on maintaining transparency and accountability while also working to mitigate potential harms from using vast amounts of often private or sensitive information.

“Generative AI can help transform this data into actionable insights, but agencies must ensure that AI tools comply with data governance policies and avoid creating outputs that could lead to biased or unfair treatment,” according to the framework. 

To carefully adopt AI, the report recommended that agencies promote AI-human collaboration; justify their need for the technology; be able to explain how the AI tool came to a decision; create a process for people to contest AI determinations; develop safety protocols, such as the creation of “an incidents tracking database to capture and act upon feedback”; and ensure that the AI system is producing stable results.

Beyond the safe implementation of AI, budgetary constraints and drawn-out approaches to modernizing and transforming legacy systems can also hinder the responsible adoption of these technologies.

In an interview with Nextgov/FCW, Dan Chenok, executive director of the IBM Center for The Business of Government, said that “in order to bring in GenAI quickly and effectively, it’s much harder to do if you are working on a traditional kind of waterfall, long-tail cycle that the budget tends to require.”

He said funding from the Technology Modernization Fund, for instance, could help drive more agile adoption of AI technologies across government, although he also noted that the fund’s pot of available money is limited.

The government could take the principles it uses for TMF and other flexible spending efforts, Chenok said, and then apply them “to that larger set of budgetary initiatives that are paid for and then set up budget and acquisition structures that enable companies and governments to work together to bring in these technologies.”
