Bridging security and productivity at Energy
Meet the Department of Energy's CAIO Helena Fu and CIO Ann Dunkin. They're looking to support employees with AI tools while also keeping an eye out for what the emerging tech means for their national security missions.
Out of over 400 federal agencies and offices, few entities seem better equipped to implement artificial intelligence solutions internally than the Department of Energy. As the agency tasked with overseeing the expansive U.S. national laboratory network, leadership has access to the research and development to drive both AI policy and implementation.
As a result, the agency has a dual perspective on the emerging technology that looks to both expand on AI’s possibilities and consider how it would operate in the organization internally. Likewise, Energy’s director of the Office of Critical and Emerging Technologies and chief AI officer, Helena Fu, and its chief information officer, Ann Dunkin, work closely on the agency’s overall approach to AI.
This interview has been edited for length and clarity.
Nextgov/FCW: What is Energy up to with AI? What are you guys working on?
Fu: I think one of the really interesting things about DOE is that we are a large R&D organization, and I think the thing that is a little bit different about DOE is that we not only are large, but that we span so many different mission spaces, from open scientific discovery to the applied energy space to our national security mission. And so I think that is something that really is important within the context of a dual-use technology like AI, where we have the ability to see what the capabilities are like in the open, to do additional work on the national security side, and have those two things inform each other as two sides of the same coin.
It is also very interesting, within the sort of CAIO-type context, that DOE is doing quite a lot of research in AI and how it relates to our mission space, as well as really leading the charge here on how AI can be used within the department. So these are two sides of the same coin that we really think are complementary here.
Dunkin: From the operational side — it's not as sexy and exciting as what happens on the R&D side — but it's important for DOE’s competitiveness going forward. We need to be able to deliver the capabilities that will allow our employees to be as productive as possible. We've been working across the enterprise to really understand what that policy looks like. We put out some guidance — we published [revision] one privately, but then we published [revision] two of that guidance publicly — that gives people some guardrails: How do you use AI? How do you keep DOE data safe? How do you use it effectively? That sort of thing.
We've got folks all over the place in DOE, literally, because our labs are so forward-leaning, creating capabilities around sandboxes, right? Here's a place where folks can play in a safe environment. We're working on setting some stuff up at headquarters. We have various labs across DOE that have done that, and hopefully we'll have some more news in that space in the near future about how we're pushing that stuff out. But we're trying to be cautious and make sure we do that safely, which is the next point. We need to make sure we protect our assets. We need to protect our assets from AI, as well, because we have a lot of people who want to get DOE’s data, who are going to use AI as part of their efforts. So we need to understand how we can use cybersecurity to defend DOE’s resources as well.
So it's a productivity tool that we're trying to roll out, and that's slow because, for example, one thing people are most excited about is Microsoft Copilot. Copilot is not even available in the government cloud yet; it's not FedRAMP-ed yet. So we're waiting for that; we're working with Microsoft on that. But also, in addition to the productivity, it's the protection of DOE as well. And then I will just add, thirdly, that we have built some productivity apps using AI. We built an AI chatbot, for example, and some other things. So that's sort of the IT side of the picture.
Nextgov/FCW: And what I’m hearing is that securing data and cultivating trust seem to be Energy’s biggest challenges and priority areas. But do you have any others?
Fu: I would add, in addition to securing data, it is also about how we use it usefully. I think one of the things that we're really excited about as part of our [Frontiers in Artificial Intelligence for Science, Security and Technology] initiative is how we utilize all of the scientific data that we have at user facilities across the country. And this is something that is both a huge opportunity and something where we do need to figure out how we manage the risk. And again, I think that points to the dual-sided nature of DOE as a place where we can do that.
And in addition to securing the data and how we use it, and the trustworthiness of the models themselves, on our side it is about thinking through how we can apply them to actually solve things at the end of the day. This is only going to be as useful as how we're actually able to use it. And so whether it's, in the research space, trying to solve mission challenges, or even internally, how do we actually use it? How is the workforce going to use this to its maximum benefit?
Dunkin: DOE is uniquely positioned. We've got this tremendous asset of our high performance computing; we've got leadership computing; we've also got a number of other smaller supercomputers. And when you combine that computing capacity along with AI, we can then revisit some of the models we've been developing over the years and add AI capabilities to those models that we've built to do a better job of analyzing things. You know, many of our challenges are so data intensive that that's what drives building many of our big supercomputers. We build some of them to get better at supercomputing, but we build some of them simply to have capacity to solve really hard problems, like fusion. The more capability we have, the more we are able to understand our problems, model them, and solve them. And AI is another component that lets us do that better. So I think that's a huge benefit within DOE, and we want to make sure, from our side of that, we are able to enable that capacity to be built and that we can secure that capacity.
Nextgov/FCW: In the chief AI role, how do you see that evolving over time?
Fu: You know, the decision by our leadership was to make me the CAIO but not to place the role within the CIO shop. I will say that for a department as large as DOE, with the broad mission space that we talked about, it was really a policy decision to have someone focused on the entirety of DOE on AI. I work so closely with Ann's team. We have very, very regular connectivity, and we need that because the capabilities that we want to be able to build are going to be nested on top of what we have internally within the department as our policy. I will say one thing that has been really helpful from my perspective is the ability to bring the labs and the research side into the conversation when we talk about how the department is positioning itself on AI and all of the things that we want to be able to do with this technology, as well as how we manage those risks.
So I think my office, the Office of Critical and Emerging Technologies at DOE, is really focused on how you find that balance across this range of technology spaces, and in areas where, I think, the general public may not realize the kinds of capabilities that DOE has or enables with other agency missions.
Dunkin: There's power in having two organizations taking two different perspectives. We are looking at how we use this AI. The Office of Critical and Emerging Technologies is working across the enterprise to look at how we advance AI and how we use AI in our science mission. And I think that collaboration between us makes us much more powerful than if one of us were trying to do both of those things.