Carnahan explains GSA's path to AI adoption
The General Services Administration currently has about 150 different AI pilots.
OAKLAND, Calif. — One big question for Robin Carnahan, head of the General Services Administration, when it comes to artificial intelligence is, “How do we keep up with the speed of change that we’re seeing in this and balance those opportunities?” she told an audience of civic technologists last week at nonprofit Code for America’s annual conference.
“Because it really is kind of what we have dreamed about for 20 years, that you can harness data,” said Carnahan. “We have the data and the talent and the capacity to actually use that for the public good, but we have to do it in a way that mitigates risks. So how do you strike that balance, I think, is the challenge of our time, particularly when it comes to democratic societies.”
She sat down with Nextgov/FCW at the event to chat more about GSA’s approach to the technology. This interview has been edited for clarity and length.
Nextgov/FCW: Where could AI be useful for the government? Does GSA have any use cases you’d be able to share?
Carnahan: We are doing a lot of piloting, experimenting with AI tools… We've got right now about 150 different pilot projects that are happening around GSA, as of a couple of weeks ago… using 132 different tools. We've got seven sandbox environments that are set up.
One of the things that I know the team has worked on… was something called a Gov CX Analyzer. So, basically, government websites often have surveys attached to them. I don't know about you, [but] a lot of people fill out surveys when they're really mad or maybe you’re really happy… So we're able to, with this open source tool that the team built, not have to rely just on those survey responses that maybe a few thousand people decide they're going to say something, but actually get a view of what people are doing on these websites, where they're getting bogged down, where they're dropping off, where we can see friction areas that we can reduce… That's a really interesting example of where we're using some AI tools.
Nextgov/FCW: You spoke onstage about the big challenge being balancing opportunity with risk management. That was a big focus of the recently finalized guidance from the Office of Management and Budget on AI in government — how is implementation going at GSA?
Carnahan: GSA’s job is to make it easier for agencies to buy things that they need. That includes technology that is going to use generative AI features, and so what we want to be able to do is think through questions and trade-offs that everybody's gonna need to make. We put out recently, and I hope you've seen it, the AI procurement guide… It lays out some sort of general guidelines about expectations — about privacy, about data provenance, about not selling people's data, about how you deal with building on top of authoritative government sources and public data, and how that is accounted for… It's really meant for procurement officers who have a really big task of, “How do we really lay out these guidelines in ways that industry can help us get to where we want to be?”
Nextgov/FCW: What are some of the biggest challenges of striking that balance of risk management versus opportunities?
Carnahan: I think much of it is about just the speed of change. Things are just happening very fast, and so understanding how that can fit in, in a context where we have to have systems that are secure, they have to have FedRAMP authorization, right?
The key for us in dealing with our partners and industry is transparency. In the past, the government has tended to just like, buy a thing off the shelf, or [say], “We just need you to solve the problem for us.” That is no longer going to really be able to work because we know that so many of these systems have lots of component pieces to them, and we need transparency about that because our goal, ultimately, is to have a seamless experience for our customers, which means we need data to be able to move around, we need APIs to be able to access… and not be blocked by proprietary things that can't talk to anybody else. So that kind of partnership is really what we need from our industry partners.
Nextgov/FCW: What can you tell me about how GSA is building these requirements, for transparency especially, into contracting language?
Carnahan: One of the things we're planning to stand up is a new federal advisory committee. We do these in government quite a lot, but we know that we're gonna get the best ideas if we actually bring a lot of voices to the table about what's really happening on the ground and what the real needs of agencies are, and all of these people need to be in a room to have these conversations. So we're gonna be standing that up.
Nextgov/FCW: That will be AI and procurement focused?
Carnahan: AI federal advisory committee — FACA… My favorite acronym, though, these days is an SBOM… It's a software bill of materials, so it's the same as if you look at food, the back of a package of food, you have the ingredients. That is what we need in our software, because we know that it's made up of a lot of parts and pieces, and we need to be able to have security provenance through all of that line, and so you're gonna be hearing more about that.
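[Carnahan does not name a specific format here, but the two SBOM standards in common use are SPDX and CycloneDX. As a hypothetical illustration of the "ingredients list" idea she describes, a minimal CycloneDX-style JSON fragment declaring a single software component might look like this; the component name and version are examples only:]

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:generic/openssl@3.0.13"
    }
  ]
}
```

Each entry in `components` is one "ingredient," which is what lets buyers trace security provenance through the whole supply chain.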
Nextgov/FCW: And is that relevant to AI as well, beyond the security aspect?
Carnahan: It’s relevant to all software we’re buying these days.
Nextgov/FCW: What about the data that feeds AI?
Carnahan: Generally when you speak about SBOMs, it’s just about the components, the ingredients. When we're talking about data, that's slightly different. But data provenance, like, “Where does this data come from?”... It really matters, right? And then there are issues of data ownership. The government’s got a lot of data… One of the things that tends to happen a lot is that somebody will take that data, do something with it, and then sell it back to the government.
So one of the things I'd like to figure out is how we can actually, when there's an improvement in the data… if you want to go monetize it someplace else, that's fine, but we ought to have a better deal as the government, since we're the ones who provided the data in the first place.
That is sort of a new way of thinking about some of these things, that this data that we have are real assets. Those assets belong to the American people. They have paid for those through their taxes, and we need to make sure we are using that appropriately.
There are lots of issues around the data models, about what goes into building the data models, and so that's some of the experiments and pilots that are going on right now. We know that theoretically, more data creates better models, but it has to actually be trusted data in the first place.
Part of government's role here is to optimize for trust. No one else has that as their goal. That should be the goal of government. We’ve got authoritative data sources. We need to make sure they actually are accurate and trustworthy.
Nextgov/FCW: Should interested parties look for requirements or updates around transparency and data in contracting language, and around how contracting works for AI?
Carnahan: Again, taking a look at the best practices guide that we just put out, we intend to address a lot of those issues. This is V1. We know this will be iterative and change based on what we're learning. But yeah, look, we need to be clear about what our needs are.
I'm a very big fan of not forcing everybody to reinvent the wheel. I often say, “In government, we all have the same problems, but we don't always have them at the same time,” and we really kind of do right now. This is a very new arena for everyone. We've got a lot of experts at GSA who’ve been very thoughtful about this, working across government with agency partners, talking a lot with industry. We just want to get some of this stuff down so other people can take advantage of it.