Can battlefield drones spot threats to troops?
Army Lt. Col. Philip Root, the acting deputy director for DARPA’s Tactical Technology Office, explains how the DOD plans to operationalize autonomous systems, forever changing the human-machine relationship.
The Defense Department is rapidly and unabashedly integrating artificial intelligence into its ecosystem -- from back-office functions to the battlefield. But with increased public scrutiny over human interactions with AI-enhanced machines, tactical projects have an extra load to bear.
Enter URSA, the Urban Reconnaissance through Supervised Autonomy project, a program that has become notorious for weaving legal, moral, and ethical concerns into its foundation. It aims to use sensors, AI and drones to distinguish between threats and noncombatants.
URSA is working with four companies during the first phase: Draper Laboratory, Scientific Systems Co., SRI International, and Soar Technology. Contracts were awarded in December and January for a combined $22.6 million, according to the broad agency announcement.
FCW talked with Army Lt. Col. Philip Root, the acting deputy director for the Defense Advanced Research Projects Agency’s Tactical Technology Office, to get an update on how DOD plans to operationalize autonomy and forge relationships between humans and machines.
This interview was edited and condensed for clarity.
FCW: So what’s the URSA pitch?
Root: URSA, which just started Phase 1 this year with four performers, takes a different look at the vexing problem of discriminating hostile and non-hostile [individuals] in urban operations. We want to provide more awareness so when a soldier or Marine encounters an individual … they have more information about that individual's intentions as that person comes into view. It may seem crazy to aspire to have that level of discrimination, but at traffic control points we do the same thing.
For example, with a van speeding toward a traffic control point at 55 mph, soldiers have 15 seconds from the time they see that van to the time it could explode. So in 15 seconds, a soldier has to identify whether that’s a van full of explosives or a van full of kids.
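The arithmetic behind that 15-second window can be checked with a short script. The sighting distance below is an assumption inferred from the numbers quoted, not a figure from the interview:

```python
# Rough check of the reaction window Root describes.
# Assumption (not from the interview): the van is first spotted
# at roughly 1,200 feet, a plausible checkpoint sight line.

MPH_TO_FPS = 5280 / 3600  # feet per second per mile per hour

def reaction_window(sight_distance_ft: float, speed_mph: float) -> float:
    """Seconds between first sighting and arrival at the checkpoint."""
    return sight_distance_ft / (speed_mph * MPH_TO_FPS)

print(round(reaction_window(1200, 55), 1))  # ~14.9 seconds
```

At 55 mph a vehicle covers about 80 feet per second, so a quarter-mile sight line buys a soldier roughly the 15 seconds Root cites.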
FCW: How do you even do that?
Root: It’s amazing, right? So from the outside, it's the same. We have to get inside the driver’s head to understand intent. We do that by putting signs out, a stop sign. If they speed by the stop sign, that’s information. So we’re putting out a sign, a probe, to tell the target, someone we’re watching, to stop. And then we give them another sign -- send out a flare or fire warning shots depending on rules of engagement -- to insert a message. And how they respond is more information.
A van full of kids that blows through that stop sign doesn’t mean they are a target. If they blow by several, it doesn’t mean they are a target. But at some point we say, "You’ve failed a number of tests here."
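The "failed tests" logic Root describes -- each ignored probe adds evidence, but no single miss is decisive -- can be sketched as a simple accumulator. The probe names, weights, and threshold here are invented for illustration; nothing below reflects URSA's actual design:

```python
# Hypothetical sketch of escalating probes: compliance resets
# suspicion, while each ignored signal adds weighted evidence.
# All names and numbers are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class IntentTracker:
    threshold: float = 3.0            # evidence needed before escalating
    evidence: float = 0.0
    history: list = field(default_factory=list)

    def record(self, probe: str, complied: bool, weight: float = 1.0):
        """Log one probe; ignoring it adds weighted evidence."""
        self.history.append((probe, complied))
        if complied:
            self.evidence = 0.0       # turning around ends the test
        else:
            self.evidence += weight

    def should_escalate(self) -> bool:
        return self.evidence >= self.threshold

tracker = IntentTracker()
tracker.record("stop sign", complied=False)
tracker.record("flare", complied=False, weight=1.5)
print(tracker.should_escalate())  # False: two misses are not enough
```

The design mirrors the interview's point: a van of kids blowing through one sign, or even several, stays below the threshold, while compliance at any point clears it entirely.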
You can look at URSA as finding targets, but I don’t like that view. I prefer the view of ensuring that noncombatants can get out of this scene, a van full of soccer kids turns around. Fantastic! We don’t want you around; we want to give you awareness that this isn’t a good day to be outside.
So as a military patrol is moving through a city, we’d love to let everyone know in advance. But they can’t all just leave. We have to operate with non-combatants around and provide them every opportunity to remove themselves from the environment. Anyone left would then have hostile intent.
We might send a message via drone, for instance, and say today is not a good day to be outside. We recommend you go to the nearest building.
FCW: So a drone just comes down and starts talking?
Root: Could be. We just started, so I don’t presume to know. It could come down and say, "U.S. forces are approaching. Not a good day to be outside." Anyone who stays outside might have a really good reason to be outside; it doesn’t mean they’re hostile in any way.
Could be they didn’t hear us, are deaf, it’s noisy out -- so we have to seek a different method. Maybe we put a laser on the ground to confirm they’re seeing it. Perhaps we play a popping sound and combatants and non-combatants respond differently. But at no point is the autonomy doing this on its own.
We just want to collect as much information so if someone with non-hostile intent wanders into a U.S. patrol, we can provide a folder of information before a soldier takes their finger out of the trigger well. Nobody wants to be in the situation where a soldier and a non-combatant come in contact and both are surprised.
FCW: There’s such a personal and emotional component to this. Do you have a suite of people working on this -- psychologists, behaviorists?
Root: We have a team of behavioral psychologists and social science models of how people respond. But, unfortunately, there's not a whole lot of data on these types of drone interactions. Nobody's tried this. We're going to watch social science develop at the same time as the AI and machine learning. And I'm not convinced that it's going to work. But I'm convinced someone should be trying so we can take these lessons learned and apply them to whatever comes next.
We have to be committed to this problem. We can’t shirk away from it, because the outcome is far more perilous with the current problem -- where soldiers and non-combatants are put in harm’s way.
One lesson that we’ve learned is that under a real interrogation, suspects who are angry are often the innocent ones, because they’re so mad that they’re caught up in this. To your point, if someone is having a bad day and a drone gets in their face, they might throw a rock at it. We have to understand and factor that in. It might mean that we’re terrorizing the population. We could absolutely make the situation worse; we’re very sensitive about it.
FCW: Have you started designing it? I’m thinking that would also have an impact on how people interact with the technology.
Root: If you’ve ever had a drone buzzing in your face, I’m not sure there’s a way to react other than with anger. All of that’s real. A ground robot that crawls up to you, into your personal space is not going to elicit a positive response. But at some point, that is appropriate to let people know we’re serious. There’s a spectrum of probing and solicitations, but we have to start with some suspicion and shouldn’t terrorize people who are not suspicious.
We’ve started the legal, moral, ethical considerations before we even awarded the contracts so we could get ahead of this. A panel of lawyers, ethicists, philosophers and academics meets quarterly to provide written technical guidance.
Harm has many forms. Clearly, warning shots have a greater potential for harm than just a message. Hopefully, warning shots are never necessary, but we have to understand this spectrum of the possibility of harm. It’s never been done -- only theorized. We have to reduce that theory to practice. Because the performers just want a number. When we design AI, it’s just math.
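One way to picture "reducing the theory to practice" -- giving the performers the number Root says they want -- is an ordered harm scale over the probes mentioned in the interview. The scores below are invented purely for illustration:

```python
# Illustrative only: a toy harm ordering over probes named in the
# interview, so an escalation policy can always pick the least
# harmful option available. The numeric scores are assumptions.

HARM_SCALE = {
    "broadcast message": 0.1,
    "laser marker":      0.2,
    "popping sound":     0.3,
    "drone approach":    0.5,
    "warning shot":      0.9,
}

def least_harmful(probes):
    """Pick the available probe with the lowest harm score."""
    return min(probes, key=HARM_SCALE.get)

print(least_harmful(["warning shot", "laser marker"]))  # laser marker
```

The point of the sketch is only that once harm is expressed as a number, "escalate as gently as possible" becomes ordinary math -- which is exactly the reduction Root describes.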
Is it going to be right? Absolutely not, but it’ll be better than the nothing we have now.
FCW: And then there are the cultural differences between allies and subjects that also have to be considered.
Root: Absolutely.
FCW: So what are the milestones for URSA in the next 18 months?
Root: The first phase is 18 months, so at the end we’ll have a demonstration from each of the performers on their approaches and then down-select to about two performers for Phase 2.
In the meantime, we expect experiments from each performer with whatever they have ready -- in simulation, in augmented reality and in the field -- culminating in real social science experiments with robotics.