How Congress is gearing up to take advantage of generative AI
The House is working on new AI-specific policies, guidance and training.
Lawmakers aren’t only looking to regulate a rapidly changing artificial intelligence landscape. Some on Capitol Hill are also pushing to use the technology internally, to their own benefit.
“We must ensure Congress is ready to manage the risks AI poses,” Rep. Bryan Steil, R-Wis., said last week, “while leaning into its rewards.”
Steil chairs the Committee on House Administration, which held a hearing on AI in the legislative branch last week. That hearing followed a similar one held the week prior by the Senate Rules Committee.
To stakeholders, the hearings are evidence of a bipartisan interest in capitalizing on the technology’s potential for Congress and the result of tech capacity that has grown on Capitol Hill over time.
“Congress isn’t known as being cutting-edge on technology,” admits Aubrey Wilson, former deputy staff director for the majority on the Committee on House Administration. But the two recent hearings on using generative AI, she says, show “a really amazing evolution.”
“I really see the benefits of generative AI as a huge capacity-builder for the legislative branch,” Wilson, now the director of government innovation at the PopVox Foundation, told Nextgov/FCW.
Generative AI has the potential to help Congress improve internal operations, inform policy and even build trust in the institution through increased transparency.
“The excitement around… this more advanced and this more kind of consumer-facing version of AI is real,” Yuri Beckelman, chief of staff for Rep. Maxwell Frost, D-Fla., and former staff director of the Select Committee on the Modernization of Congress, told Nextgov/FCW.
“It's seen as something that could truly help people better serve their constituents, better ask questions that inform policy, better help with operations, improve getting information to constituents,” he said. “There's a lot of interesting applications and people are excited about it.”
So far, House staffers are already using AI to help respond to constituent correspondence; jumpstart communications like memos, briefings or speeches; summarize lengthy content; and more, according to a December House Administration report on AI.
The House first approved ChatGPT Plus for use last summer after securing changes to the product’s terms and conditions, John Clocker, the deputy chief administrative officer for the House, said during last week’s hearing. His office provides services and business solutions to House members, offices and staff.
Clocker’s office has assessed the House’s governance structure against the National Institute of Standards and Technology’s AI Risk Management Framework, an exercise that identified actions to improve that governance, he said.
His team will develop new AI-specific policies for the House, along with guidance and training, he said, and is encouraging member offices to set their own guidance for staff as well.
The Senate, meanwhile, already released guidance in December on the use of certain generative AI tools.
Beckelman said that, so far, he and his staff aren’t using generative AI tools as they wait to see more policies, guidance, lessons learned and best practices from other offices.
Among the challenges and concerns associated with generative AI is the amount of sensitive information in Congress and how such tools could ingest it. There are also worries about the accuracy and potential bias of generative AI tools, and about their being slotted in as a replacement for genuine connections between lawmakers and their constituents, he and others told Nextgov/FCW.
Bad actors also have access to generative AI and other types of powerful technologies, Clocker pointed out to lawmakers, noting that “we must evolve and be increasingly vigilant about the AI tools and websites we access. Just because an employee can access a site through their House device does not mean it is safe. We need to develop AI-specific policies and processes. We need to develop guidance and training opportunities for House staff.”
This buy-in for AI and attention to the associated risks stem from tech capacity that has grown on Capitol Hill over time, stakeholders say.
“Congress’s IQ on technology and data has just really improved over the last five, six, seven years,” said Lorelei Kelly, a researcher who leads the modernizing Congress portfolio at Georgetown University’s McCourt School of Public Policy, pointing to TechCongress and the Select Committee on the Modernization of Congress as examples.
The Bulk Data Task Force, now called the Congressional Data Task Force, and the House Digital Services team also support Congress’ greater technological capacity, said Beckelman.
Member offices aren’t the only ones tapping into generative AI — other legislative branch offices and agencies are, too.
Judith Conklin, chief information officer at the Library of Congress, said the library is working with the Congressional Research Service to experiment with using AI to create bill summaries and with deploying natural language processing tools that identify similar bills and assign them to the right internal analysts.
The Government Accountability Office recently deployed a large language model with GAO-specific information, said Taka Ariga, GAO’s chief data scientist and director of its Innovation Lab.
And Hugh Halpern, director of the Government Publishing Office, told lawmakers last week that the GPO is working on three AI pilots and put an AI governance directive in place last year. In addition to its publishing role, GPO maintains online access points for government information.
“Our vision is for an America informed,” he said, “so to the extent that AI tools can increase our speed and accuracy in delivering that kind of transparent, primary information to those folks trying to figure out how Congress is working, that is a benefit to the agency, and we think ultimately a benefit to Americans and citizens of the world.”