Labor's AI mission spans public policy and internal agency use
Meet the Department of Labor's CAIO, Mangala Kuppa. She's focused in large part on AI safety and making sure the technology tools promote workers' well-being.
Mangala Kuppa is the chief technology officer and director of technology, innovation and engineering in the Labor Department’s Office of the Chief Information Officer. She also happens to be the agency’s chief AI officer, a role in which she has relied on existing efforts and workforce initiatives to find success.
Kuppa sat down to talk with Nextgov/FCW about her work. This interview has been edited for length and clarity.
Nextgov/FCW: How do you view your role? How much of what you do is activity from the AI executive order and how much of it is other things that might not be in that order?
Kuppa: So the chief AI officer, the role is new but the underlying mission is not. As you know, [at the] Department of Labor, we care about the welfare of workers. And we've been using technology to improve our mission outcomes. AI happens to be a transformative technology that holds tremendous power. So in terms of the role, at the high level, it's really [four] things that we're trying to do. Some of them stem from the executive order and some of them from the [Office of Management and Budget] memo.
Labor has a dual role — we have some responsibilities in the executive order to, for example, release AI principles for the public to use. And DOL conducted listening sessions in doing so. So there are those external policy responsibilities, which are handled by the Office of the Secretary as well. And then the internal side is where the CAIO role comes in — how DOL is using AI. So in that space, there are three broad categories: coordinating the internal use of AI, obviously, because the department is very diverse, with 27 agencies, so that's not a small feat; promoting AI innovation; and then the last would definitely be managing the risks from the use of AI. So those are kind of the broad categories of responsibilities, and everything we do kind of stems from there.
Nextgov/FCW: What is your role in AI acquisition and workforce development? Do you guys do work on that at all in your role?
Kuppa: So the role is kind of centralized around responsibilities of AI, which includes a lot of swim lanes. So you touched on two swim lanes today. There are more. So from an acquisition standpoint, obviously, the memo that was released has some guidelines as well on how to approach acquisitions. One of the things is to make sure that when we are procuring AI goods and tools, that they are secure, they're safe, and that we understand actually what we are procuring, as well as [that] we are implementing them in a way that promotes workers’ well-being.
And then in terms of resource and capacity planning, to kind of carry out the mission of responsible AI, you need people, you need resources, right? Most federal agencies, from my perspective, haven't necessarily received additional funding to do this. At least at the Department of Labor, we have been flat-funded. That may change in the future, but… at the moment, we are all retooling our resources to meet what we need to do per the memo. And then what makes the AI workforce aspect a little bit more complex is capacity building with our resources: There is a huge interest in AI right now, across the spectrum, not just in the government but also in the private sector. So we are all competing for the same resources, and the resource pool is limited. It's a fairly new technology. People are adopting it, but if you truly look at the resource pool, it is limited. The approach we are taking is to recruit [on the] career ladder earlier and then train them. And we also want to look at our IT specialist categories and train them as well. So kind of retool, and then provide the training opportunities, [is] how we're trying to look at it.
Nextgov/FCW: You mentioned retooling resources for this work. What does that look like? Is that just for AI hiring or is that more broadly?
Kuppa: Currently, it’s AI hiring. With respect to broader impact, of course, AI is still new, so ultimately AI is a change management exercise. I jokingly say it's change management on steroids because the time to learn, the time to adapt, the time to implement are crunched down into literally months and weeks — that is, every other day there is a new development in AI, and keeping up itself is a huge task. So realizing that … [the] first thing we want to do is educate our workforce on what is there. So we have a heavy emphasis on training and literacy programs.
The second piece to that is, ‘Where can you use AI and where should you use it?’ I always tell my customers that AI can be used everywhere, [but that] doesn't mean we should use it everywhere. So with the resources, you always want to prioritize and look at where you want to improve services of value to the public, and things of that nature.
While we're doing that, we also have to build the infrastructure and the data that are essential to actually perform any meaningful use cases, rather than just using, for example, ChatGPT. I always say that DOL has a bit of a leg up in this space because we didn't start AI today. I remember I was part of the Bureau of Labor Statistics, which is one of the agencies under DOL. I worked there for 10 years before moving into the central office. We had autocoding examples, custom model development examples, dating back probably seven to eight years, if I remember correctly. And then within the Office of the Chief Information Officer, we have had an AI setup for the last four to five years. So we have use case examples of natural language processing and other things like that. We do have those foundational things in place.
What helps the department really is that our acting secretary has set a vision, a worker-centric vision, for us. In addition to complying with the executive order and the [Office of Management and Budget] memo, we as the Department of Labor are very focused on making sure AI is actually used to enrich jobs, that it has a positive impact and that it is improving our services. So that's a central theme of everything we do, right?
[But] sometimes even just to find data is not an easy task. So within the department, we have actually centralized all of our mission system support under OCIO. It's been, I think, three to four years, so that helps us tremendously. We have had a lot of data initiatives. On the infrastructure side, we had set up an enterprise data platform… And we have some experimentation already and some examples, some use cases in production using AI, even generative AI. So our goal now is to build on that, enhance the knowledge of AI across the department and work with our agency partners to choose the right use cases. And then when we embark on implementing those use cases, we want to make sure that that's done responsibly.
Nextgov/FCW: Who do you report to at DOL? And how many people are on your team? Do you have people that report to you?
Kuppa: So, I report to the deputy chief information officer. And then, of course, all the leadership that comes with that, our chief information officer.
I'm also the chief technology officer, so my role is a dual role as well. I don't think they're necessarily separate, by the way. Technology is a big piece of AI as well.
I have my direct reports within the team... I think we're close to 20 people, but please keep in mind that they’re not all AI people, because we have chief technology officer responsibilities as well. If I just narrow it down to AI, from a federal workforce standpoint we are a small team. We're looking at probably… around 10 at this point in time.
Nextgov/FCW: And what are your plans for growth?
Kuppa: In terms of growth, what really happens is that each of our agencies is appropriated for its mission. So it is no different than how we carry out business in the non-AI space… and that's why we are so focused on partnering with other agencies, and that allows us to have meaningful discussions around AI. And then when our agencies kind of have a good sense of where AI can improve the mission for the American public, then they can prioritize those initiatives… and then that's where the growth comes from. This happens to be the budget cycle, so we're trying to ask for resources, because the governance piece does not really come from the agency use cases, right? The governance piece needs resources. A team of fewer than 10 people isn't going to cut it. So we do want to grow, of course. It's the usual process of going through budgetary cycles, requesting funds and seeing where we land, so we are also exploring those options. Definitely we do want to use AI from a growth standpoint. We do believe that AI offers tremendous opportunity to improve government services. Of course, the key there is to be responsible with it and safeguard ourselves from the risks that AI presents.
Nextgov/FCW: I'm curious: What's your biggest priority?
Kuppa: I look at priorities as there being multiple swim lanes we need to make progress in. And within each swim lane, there is a priority I can talk about. For example, if you think about policies and practices — not just what is in the executive order and the memo and our acting secretary's vision — the priority there is to make sure that we have actionable practices built in so that the work is repeatable. In that space, even prior to the memo, we had partnered with Stanford University and actually established an AI principles-to-practices guide. And we had undertaken use cases… We have implemented those practices in a couple of use cases and documented all of that, so there is a reference for, “How do you go about implementing an AI use case responsibly?”
When the memo and the executive order came out, we went back to that guide and revised it to make sure that it is in compliance with the practices recommended in those two documents, as well as our acting secretary’s vision. The next step in that swim lane is to further train resources. The process has to be repeatable.
The other one is literacy. I use literacy a bit more generally, in the sense that it's not just about understanding AI technology, it's also understanding where it can be used. And the understanding is different for each stakeholder. An executive needs to understand, at the highest level, what this technology presents in terms of opportunities versus risks. Whereas somebody who is AI resource staff on my team needs to know in more detail how to actually use and implement it. We had developed a 101 training internally as a baseline to provide to all of our employees. We have run multiple rounds of that training, and it continues to be something that we focus on. In that swim lane, we have made some progress, and we continue to build on it, and we want to keep that at the center for the next couple of years, honestly. AI literacy and training is a continual process to educate every stakeholder.
On the infrastructure side… we have the ability to access all foundational [large language models], whether it be OpenAI, whether it be [Google's] Gemini, whether it be [Meta's] Llama, whether it be [Anthropic's] Claude. As a government entity when you're trying to access something, it has to be FedRAMP-ed, right? It has to be secure. So having those contracts, having the ability to reach them, that's already in place. There, the evolution is to bring new tools that could be of great value in implementing use cases.
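Kuppa doesn't describe DOL's integration layer, but one common way to keep several FedRAMP-authorized model providers interchangeable is to put a thin shared interface in front of them. The Python below is a minimal, hypothetical sketch of that pattern; the class names, registry and stub provider are assumptions for illustration, not DOL's actual code.

```python
from abc import ABC, abstractmethod

class FoundationModelClient(ABC):
    """One interface over interchangeable foundation-model providers."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""

# Provider registry; in a real deployment each entry would wrap one
# FedRAMP-authorized service. All names here are hypothetical.
PROVIDERS: dict[str, type[FoundationModelClient]] = {}

def register(name: str):
    """Class decorator that adds a client to the provider registry."""
    def wrap(cls):
        PROVIDERS[name] = cls
        return cls
    return wrap

@register("echo-stub")
class EchoStub(FoundationModelClient):
    """Offline stand-in so the sketch runs without any network calls."""
    def complete(self, prompt: str) -> str:
        return f"[stub completion for: {prompt!r}]"

if __name__ == "__main__":
    client = PROVIDERS["echo-stub"]()
    print(client.complete("Summarize this public comment."))
```

Under this pattern, swapping in a newly contracted provider means registering one new client class rather than rewriting each use case.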
It's hard for any CAIO to give you one priority; unfortunately, it’s not a space with just one. You have to make progress in all of these swim lanes.
And last but not least … is, you may have practices, you may have procedures, but how do you centrally coordinate responsible use of AI? So what we have done is develop a framework called the Impact Assessment Framework. The goal is that when a use case is thought of or [going] to be developed, the customers who need it come to the [CAIO] office, which is my office. The first thing we do, actually, is look at the impact of that use case — which is actually one of the requirements from the memo — but we have already taken the next step in building that framework, so that it's actionable and operationalized, so to speak. So we look at the use case and we look at it from a risk categorization — is this high, medium or low risk — and the way we define high is obviously what the memo calls safety-impacting or rights-impacting use cases. And then low would be, you're just using ChatGPT for day-to-day kind of productivity, and medium would be kind of in between, generally speaking. So with that framework, it allows us to really look at each use case, not just from a technology perspective but a policy perspective.
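The interview gives only the outline of the Impact Assessment Framework, but the triage logic Kuppa describes can be expressed in a few lines. The Python below is an illustrative reading of that high/medium/low split; the field names and function are hypothetical, not DOL's implementation.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # safety-impacting or rights-impacting, per the OMB memo
    MEDIUM = "medium"  # everything in between
    LOW = "low"        # day-to-day productivity use

@dataclass
class UseCase:
    name: str
    safety_impacting: bool = False
    rights_impacting: bool = False
    productivity_only: bool = False  # e.g., ChatGPT-style daily tasks

def categorize(use_case: UseCase) -> RiskTier:
    """Map a proposed use case to a review tier, mirroring the
    high/medium/low split described in the interview."""
    if use_case.safety_impacting or use_case.rights_impacting:
        return RiskTier.HIGH
    if use_case.productivity_only:
        return RiskTier.LOW
    return RiskTier.MEDIUM

# A model that affects individual rights lands in the high tier.
print(categorize(UseCase("claims triage", rights_impacting=True)))  # RiskTier.HIGH
```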
Nextgov/FCW: Biggest challenges? And how are you working to address them?
Kuppa: I think the pace of technology is definitely challenging, and the effort of keeping up with it cannot be overstated. The other one is, I would say, there's a lot of, unfortunately, misunderstanding and misinformation about AI: You tune in to TV, you'll hear about AI; you log into YouTube, you'll hear about AI. There is a lot of information out there, and it is not clear how much of it should be trusted. So a challenge for all of our organizations is to provide the right information about AI. And I think that exercise of educating people is a change management exercise, and that definitely is something that is a challenge for everyone.
Nextgov/FCW: And how do you see the role of the chief AI officer in government evolving over time?
Kuppa: In AI, I always say that no one has a crystal ball, unfortunately, because it is such an evolving field. What I would say is that the focus the government is placing on AI is a huge benefit. The fact that we have an OMB memo and executive order, what the administration is doing, everything is a step in the right direction. Not just here in the U.S. but across the globe, there is an awareness of, “This is a technology to be reckoned with, and there must be guardrails.”
I think, in my perspective, what will happen is that the role will follow the evolution of the use of AI. Some of that depends on various policies and various actions people will take, even in the government, in identifying the right use cases for AI. And one thing I would say is that every policy we set today, we have to be ready to change tomorrow. You should never think of AI as, “Okay, I got this impact assessment framework. My job is done.” No. Even when you implement an AI use case, you have to frequently validate its quality… At least on an annual basis, you have to retest your model to make sure that it's not introducing new biases, for example.
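Kuppa doesn't specify how DOL re-tests models, but one widely used heuristic for the kind of annual bias check she mentions is the four-fifths rule over subgroup outcome rates. The sketch below is an assumed illustration of such a check, not DOL's actual procedure; the groups, rates and threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GroupMetrics:
    group: str
    selection_rate: float  # share of favorable model outcomes for the group

def four_fifths_check(groups: list[GroupMetrics], ratio: float = 0.8) -> list[str]:
    """Flag groups whose selection rate falls below `ratio` times the
    best-performing group's rate (the common four-fifths rule of thumb)."""
    best = max(m.selection_rate for m in groups)
    return [m.group for m in groups if m.selection_rate < ratio * best]

# Hypothetical annual re-test on fresh scoring data.
latest = [GroupMetrics("A", 0.42), GroupMetrics("B", 0.30), GroupMetrics("C", 0.41)]
flagged = four_fifths_check(latest)
if flagged:
    print(f"Re-review model: possible disparate impact for groups {flagged}")
```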
I think this role is going to be critical and central. I think organizations that build sound engagement around AI are going to be tremendously successful, because it needs multiple perspectives. I always say that if AI is a puzzle, technology is only one piece of it. There are so many other pieces that all need to come together to be effective.
Nextgov/FCW: Did we miss anything?
Kuppa: This technology is unlike any we all have experienced in our lifetime, but it holds such tremendous potential, genuinely. Leveraging it to really increase the effectiveness of government services is a realistic dream, and should be pursued.
Editor's note: This article has been updated to correct transcription errors.