Making AI work inside and out at SSA


Meet Brian Peltier, the deputy chief information officer for strategy and chief artificial intelligence officer at the Social Security Administration.

Brian Peltier’s role at the Social Security Administration spans more than the chief artificial intelligence officer title. He is also the deputy CIO for strategy, with oversight of the agency’s enterprise architecture, innovation, financial and talent management arms.

But the CAIO position has become a significant part of his work, with all of the governance and attention artificial intelligence has garnered at SSA. Peltier sat down with Nextgov/FCW to discuss that focus and what the technology could hold for the agency. This interview has been edited for length and clarity.

Nextgov/FCW: How do you view your role? How much of it is activity from the Biden administration’s AI executive order and associated guidance? What is your job?

Peltier: My goal is to oversee all use cases at the agency — what they do, how they’re doing it — to make sure they’re in compliance with the executive order and the OMB memorandums. I’m also really trying to make sure we understand the best use cases for artificial intelligence, and that we’re communicating and training people on that — on the good and the bad, I’m going to say it that way. Meaning that people are aware of it. What does it really mean? You see the news that, “Oh, artificial intelligence is going to take over the world.” Well, no, it’s not. Those are things that people need to understand a little more about — how it works. What does it do? What does it do well? What does it not do well? And communicating that to other stakeholders inside the agency, and then making sure that when they do want to use it, we’re using it effectively, in the right way: ethically, unbiased, preventing any rights-impacting or safety-impacting [consequences] to our customers [and] employees — to make sure that we’re actually being effective with what we do with the artificial intelligence.

Nextgov/FCW: What’s your biggest priority as CAIO? If you can’t pick just one, I’ll take a few.

Peltier: The biggest thing is to make sure we have the right governance structures over it, to make sure that we’re actually monitoring our use cases and … preventing undue burden or harm to our customers or even our own employees, right? We’re currently building up that governance framework, in alignment with the [executive orders] and the OMB memorandums, just to make sure that we’re doing the right things. We’ve formed a great team, the responsible AI team, which is basically a multidisciplinary group drawn from our legal teams, our civil rights and civil equities teams and our analytics teams. We bring them together to make sure the guidance we provide is consistent and the best for our [customers and employees], so they can use it safely and efficiently to improve their effectiveness at delivering services to the American public. So that’s one of the biggest things.

But the other one I would jump on is communication… Communicating what [AI] is, what it's not, and what you should be doing with it to make sure that we're moving in the right direction. There's a lot of confusion with artificial intelligence right now and just talking to people about what it really does and how it really works. 

Because I think a lot of people believe things have gone too far, or that it’s going to take over the world. We have to talk them back from that. That communication is a key, critical component: we need to make sure people understand how it works in layman’s terms, so they have an understanding of what reality is — and that it’s not going to solve all of our problems either.

Nextgov/FCW: Are you working at all on AI acquisition and/or workforce issues?

Peltier: From the AI acquisition aspect of it, we are looking to determine a better platform to use artificial intelligence as a foundation for us. We are looking into that right now. Our CTO is doing that exact work to help with it. 

Since artificial intelligence provides responses, especially when you get into genAI — I’m going to be very clear on this: the agency has been using artificial intelligence for 20 years. That’s a key, critical thing that people don’t understand; we’ve been using it for that long now. The explosion you just mentioned was around generative AI, which is a completely new thing that is more humanlike and interactive, which is amazing. We have not really dipped our toe deep into the generative AI stuff. We are looking, we’re testing, and we are looking for things to assist us in building that out from a procurement perspective. Again, we’re taking it slow, because we want to make sure we’re doing all the right things and to prevent any impact to, as I’ve mentioned before, civil liberties, civil rights, bias, transparency and things like that. So we’re moving in the right direction, trying to find the right ways to take advantage of it — to help our employees become more efficient and serve their customers more effectively as well.

Nextgov/FCW: What are some of the unique considerations for buying generative AI?

Peltier: One of the challenges we’ve been struggling with is that they’ve trained the models, and then it’s kind of like a black box to us, right? It’s sometimes hard for us to feel comfortable… and then there’s the training aspect when you procure those. We have to test it, and the challenge is, how do we test it effectively? Because generative AI is probabilistic, it could respond differently every time. Making sure that contextually the answer is accurate is one thing; that we can leverage it and use it is the second. There are models out there, as you know, that can help assist with that. But those are the challenges: understanding how they trained it, whether we can get that from them, or testing it to make sure we have high confidence that what we’re going to get is consistent… and that it helps us prevent any impact to our customers, the American people. So that’s the biggest thing that concerns me the most.

Nextgov/FCW: What about workforce? Is hiring or training people on AI part of your job?

Peltier: Yes, [it] 100% is. The training aspect of it — it just again goes back to the communication, the do's and don'ts, like what you should and what you shouldn't do [with] it and helping them understand it. We've had some training classes recently to help train up some of our development community on it. But again, it's not just the development community we have to train. 

We put out an optional training agency-wide. It’s in our workforce learning area, to help educate employees on what artificial intelligence is and what to do [and] what not to do. Just because it’s so ubiquitous everywhere — everybody has ChatGPT, everyone has these things.

You should always go back and validate the responses from these generative AIs — because you don’t know how they were really trained — to determine if they were right. I’ve had experiences where we’ve gone through stuff and the answer we got was not correct at all, which makes it entertaining, in that people sometimes, as you know, take things for granted… We always say, validate the responses that come from them, just to make sure they’re accurate, because they can be wrong. So that’s the communication aspect again.

Nextgov/FCW: About how big is the team that works on AI? 

Peltier: I don’t have an exact number. That’s where it gets interesting, because it’s probably between five and 10, if not more. And this goes back to what I mentioned before — we have a responsible AI core team. It is a small core team specifically inside of the CTO shop, which is three to four people, if not a little bit more.

But when you expand past that team — the multidisciplinary people, like our legal, analytics, privacy, civil rights, civil liberties and security people — that turns into 10 or more people who help advise and provide insights on what we’re doing, from our governance frameworks to our responses to inquiries and things like that. We definitely have a relatively large team to help keep an eye on all the different aspects of artificial intelligence.

Nextgov/FCW: And I’m curious, are you planning on growing those various AI-related teams or hiring more people to do this work?

Peltier: Yes, we are definitely looking to expand that team to bring in more experts. When we started this, we had some in-house experts in different areas. We’re looking to grow that team so we can use [AI] efficiently and effectively across the agency and provide guidance to other teams who are trying to implement it, so we definitely will be hiring to expand that team.

Nextgov/FCW: And what resources do you have? I’m really curious about this one, actually, because of how the budget cycle works. Do you have a line item? How are you getting your money? Is it sufficient to do your work?

Peltier: We have line items that we use inside of the CTO shop that handle that piece — basically, that we budget money toward. And we do get specific budget allocations as well, to make sure we’re getting the resources we need. As always, budgeting is sometimes a challenge at any government agency. You’ve probably heard from our commissioner [that] there are some challenges there as well, which it would be nice to overcome, to have a larger group focused on this right away. But we will overcome those. But yes, we do have a line item there that is funding. We staff that with either FTEs — internal employees — or individuals like contractors who have specific expertise to help us.

Nextgov/FCW: What’s the biggest challenge you’ve been facing in this work so far?

Peltier: The biggest challenge… is really just communicating — making sure people really understand what [AI] is and how it works, especially in the generative AI space. People think it can do everything, when it cannot. People think generative AI is a magic bullet; it is helpful, but vendors are still iterating with it to make it more effective.

The secondary one… is the pace of change with artificial intelligence right now. [It is] so rapid that it’s hard to keep up sometimes… because it’s just every day — something new, something new, something new. And then, even with the pace of change, every vendor, as you just said, is incorporating artificial intelligence everywhere, which means I now have more to manage, because it’s coming in from so many different angles. Trying to make sure people are using it effectively and efficiently can be a challenge. We’re working to overcome that.

Nextgov/FCW: Do you have any thoughts on how the role might evolve over time?

Peltier: I already see the chief AI role moving toward being an advocate for when we should use artificial intelligence to improve the efficiency or effectiveness of our workforce and of services to the public. Really being that voice of, “Hey, this is where it fits. This is what it does. It really should move in this direction.” Really helping us build out the strategies and directions to make sure we’re serving the public in the most effective way — and our employees too, because we can’t forget about them at all. They’re one of our most critical assets.