TSA's 'crawl, walk, run' approach to AI
Meet J. Matt Gilkeson, Chief Technology, Data and Artificial Intelligence Officer at the Transportation Security Administration
The Transportation Security Administration has been working in recent years to deploy enhanced security tools, but Chief Technology, Data and Artificial Intelligence Officer Matt Gilkeson said the agency is currently taking a “crawl, walk, run approach” when it comes to implementing new AI technologies.
Gilkeson said AI tools have the ability to improve the airport screening process, with the embrace of generative AI potentially allowing travelers to better understand and navigate TSA’s various policies. For now, however, the agency has largely been focused on training its workforce about the capabilities of the novel technologies and continuing to test AI tools for potential use.
Gilkeson spoke with Nextgov/FCW about his role with TSA and the agency’s work to adopt AI technologies. This interview has been edited for length and clarity.
Nextgov/FCW: Who do you report to in your organization? How many people are on your team? And what are your plans for growth?
Gilkeson: I got selected into the role of CTO and CDO earlier this year, and then within about a month of being in the role I was designated chief AI officer as well. So in that regard, I've kind of got the three hats, if you will. Folks always ask the question: How does that all interact? You know, is that extra work? In reality, I think the three really intersect nicely, because you cannot do the data pieces without the technology. You can't do AI without the technology and the data. So you have synchronization across those roles. And all three report to the TSA’s chief information officer, Yemi Oshinnaiye.
Specific to the AI role, we've got a couple of detailees. The Department of Homeland Security is hiring 50 AI Corps subject matter experts, and then they're going to be able to distribute those to the components. We're excited to be able to try and access a couple of those folks as well, and those are designated to artificial intelligence. It’ll probably be about four or five folks that are working in that space, but not necessarily permanent yet. There are other people inside the agency that are contributing to artificial intelligence that are matrix aligned to support those folks: our privacy, civil rights, civil liberties, testing and evaluation folks, and then some of our program management folks that are doing some of the technology development activities.
Nextgov/FCW: What is your role in things like AI acquisition and workforce development?
Gilkeson: The older, foundational machine learning models that perform an analysis function — those tasks have been done with artificial intelligence and machine learning for a number of years, and TSA has had that space operational for a while. I think it's generative AI where we're really pioneering new ground. My team is responsible for reviewing the acquisitions that are happening as part of the IT acquisition review process.
[IT Acquisition Review], which is a mandate that the CIO performs under FITARA, is also looking at how the technologies’ use cases are developing. We're responsible for reporting those use cases to the department and working with them as we identify anything that's rights impacting to make sure that we're applying the extra oversight to those use cases.
When it comes to our workforce, we're really excited to continue the development of the skill sets of the employees and trying to make sure that there's a baseline understanding of what artificial intelligence is. That includes our ‘Let’s Talk About It’ series, which our Executive Assistant Administrator for Enterprise Support started a couple years ago. It's kind of like a TSA version of ‘Ask Me Anything,’ and it's a time to sit down with a given topic and go through it. We've held a couple of artificial intelligence sessions and they've been wildly popular, because folks are hungry to understand the concepts. I'm excited to be able to give them access to some of the details and information about how this technology is developing.
We’re also looking at how we can build out a foundational skill set opportunity for employees. The work that we're doing at TSA to look at artificial intelligence is not designed to replace or eliminate any jobs. We're looking at how we can augment and make the existing folks’ jobs easier and make it so that our security officers in the field, for example, are more effective at the security work that they're doing.
Nextgov/FCW: How is TSA looking at using AI to enhance its operations?
Gilkeson: We have a number of security detection algorithms that we've looked at to make our folks’ screening work a little bit easier. The emerging piece of artificial intelligence is generative AI, so we're looking at use cases with our customer service centers that draw on a knowledge base of frequently asked questions. We're looking at how we can automate some of the contracting and procurement packages by looking at previous solicitations and asking a series of questions of the staff developing those packages, to make it easier to put together a request. And we're looking at the opportunity within procedures: how we might use a foundational language model that does retrieval augmented generation on our procedures and policies, so it can be an information source for understanding how the 3,000 policies that we have — from a regulatory standpoint — interact with each other, or how our complex standard operating procedures in the field operate.
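The retrieval augmented generation pattern Gilkeson describes can be illustrated with a toy sketch: retrieve the policy passages most relevant to a question, then hand only those passages to a language model as context. Everything here is an assumption for illustration — the sample policies, the token-overlap scoring (a stand-in for a real vector retriever), and the prompt format are not TSA's actual system.

```python
# Toy retrieval-augmented generation (RAG) sketch over a small policy
# knowledge base. Documents, scoring, and prompt format are illustrative
# assumptions, not TSA's production pipeline.

def tokenize(text):
    """Lowercase bag-of-words tokenization (a stand-in for real embeddings)."""
    return set(text.lower().split())

def retrieve(query, documents, top_k=2):
    """Rank documents by simple token overlap with the query, keep the top k."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query, documents):
    """Compose the grounded prompt a language model would answer from."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical policy snippets standing in for a 3,000-policy corpus.
policies = [
    "Liquids in carry-on bags must be in containers of 3.4 ounces or less.",
    "TSA PreCheck members may keep shoes on during screening.",
    "Firearms must be declared and transported in checked baggage.",
]

prompt = build_prompt("What are the rules for liquids in carry-on bags?", policies)
```

Because the model only sees retrieved passages, answers stay grounded in the policy text rather than the model's general training data — the property that makes this pattern attractive for policy Q&A.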
Those things are going to allow us to be a little bit more effective and faster in the work that we're doing, and also allow our staff to focus on TSA’s core mission. And we're homing in on responsible use. From day one of our creation, TSA has looked at any technology we deploy through a lens of function, safety, security, equity and accessibility. We're looking across our existing policies and authorities to make sure that when we do execute a test event, we are accounting for any additional responsible use principles that need to be there. We've worked with DHS to look at what we can do from a funding perspective in the future to ensure that we've got additional funding for oversight, and we're coordinating with our department partners on the executive order responsibilities that we have for sector risk management and for training of the workforce as well.
Nextgov/FCW: What is your biggest priority right now?
Gilkeson: We've got ongoing work in the detection spaces. The administrator has been talking about that publicly for the past couple of years before the big generative AI push. Those existing efforts are going to continue at full steam, because there's a lot of opportunity that's enabled by and unlocked by some of the newer machine learning and artificial intelligence models that are available.
When it comes to some of the generative AI pieces, I think it's really important to continue to focus on the people and look at how we train the workforce and make sure that folks are ready to understand what this is and how this works. The more informed folks are, the more comfortable they are with using the technology.
The second part has been looking at policy and reporting and making sure that we have some of those pieces in place. And we’re working to get some actual kind of minimally viable products underway. So we've got a couple of things that we're working on inside TSA to try and look at having a secure foundational model.
What we're trying to do is take the crawl, walk, run approach. Let's start with a very safe generative AI use case that we can do in a controlled environment, and then — as we get confidence in the testing information to say that the performance of those systems is within our guardrails and guidelines — we can look at how we might expose those to passengers or industry stakeholders as tools that they can start using.
You could envision a future state — once we've tested and have confidence in the technology — with a chatbot on the tsa.gov site that would allow you to ask questions about how to fly and how to prepare for flying, or what TSA policies are, or how to sign up for TSA PreCheck, for example.
Nextgov/FCW: What level of engagement have you had with the Department of Homeland Security about these efforts, and what guidance has that provided?
Gilkeson: DHS is kind of forward-leaning in government. Our department CIO, Eric Hysen, has also been incredible in collaborating with the DHS secretary to not only establish the DHS AI Corps, but also to bring policy, equity, accessibility and the Department of Justice together at the front end of this conversation. This helps ensure that, as they develop policy, it is done in a way that enables technology adoption by the components and supports the components’ mission execution.
Secretary Mayorkas also released the AI roadmap earlier this year that has key initiatives with regard to some of the components’ use cases. There's been a large effort to try and make sure we collect and assess the use cases. In my time working at TSA, the collaboration with the department is probably one of the strongest it's ever been in this area.
There are probably nine or so different working groups in the department on the topic, but the leadership coordination group is one that's really interesting and exciting. There’s one person from every component, and we meet biweekly with the CIO to cut right to the core of what we’re doing with safety, with AI Corps hiring and with sandbox development. It really has created a synergy across all the components, making sure that we're all on the same page from an awareness perspective, but also pulling together in the same direction to get the policies and the testing in place.
Nextgov/FCW: How do you see your role evolving over time?
Gilkeson: TSA is approaching all of this from a responsible use perspective. We're looking at how we can take the testing that we've done for technology to date and continue to make sure that it's best of breed in terms of how we test across the multiple different types of passengers that we have flying, to maintain the accessibility, the equity and the security of the screening that we operate.
It's really important that we do that, and we're working with the department to try and do it in as transparent a way as possible.
One of our CIO’s strategic goals is to empower decisionmakers with data. And the intersection of my three roles is pivotal to being able to do that and move out on that objective.
TSA is in the process of updating and building a publicly releasable data strategy that's coming in the next couple of months, and we're looking at doing that to be able to provide awareness of where we're going from a data management perspective and from a stakeholder interaction perspective. We want to look at how we can share data with our stakeholders in the ecosystem to improve the security across the passengers’ journey or the cargo journey.