Special operators hope AI can reduce civilian deaths in combat

Christopher Maier, the assistant secretary of defense for special operations and low-intensity conflict, speaks with U.S. Special Operations Command Africa troops at Camp Lemonnier, Djibouti, July 23, 2023. U.S. Air Force / Airman 1st Class Natalie Vandergriff

Automation could eventually turn “trigger pullers into the experts that can do this,” one official said.

While much has been said about the danger of giving AI a role in military operations that would allow it to kill people, there has been far less discussion of using AI to make war safer for civilians. But that's what U.S. special operations forces are starting to look at, Christopher Maier, the assistant defense secretary for special operations and low-intensity conflict, told reporters Friday.

Part of the reason for this, Maier said, is that preventing civilian harm in a large-scale conflict—such as a potential war with China—would be far more difficult than in the counter-terrorism missions special operations forces conduct around the globe.

“As we've started to exercise this and build the emphasis on [reducing] civilian harm into large-scale exercises, it becomes particularly daunting when you think of the, if you will, the scale of that type of conflict, where…we've talked openly about thousands of strikes in an hour. This boggles the mind,” he told the Defense Writers Group. 

U.S. special operations forces are “going to need the automation and aspects of artificial intelligence and machine learning and all those things that we talk about all the time on the targeting side and the operational side…built in and baked into that with a focus on civilian harm.”

The Defense Department is already doing a lot of work to reduce the number of civilian deaths, particularly in special operations mission sets, he said. One example is the new center of excellence dedicated to civilian protection in combat. 

“It also means things that are critically important but not particularly glamorous, like having actually a data enterprise that can ingest a lot of different information and make it available to others so that they can look at the lessons learned of the past,” he said. 

So how achievable is it to use AI to reduce civilian harm in conflict? 

A 2021 report from the International Committee of the Red Cross looked at areas where AI, particularly tied to more precise targeting and better battlefield data analysis, could make conflict safer for civilians and non-combatants. Such systems “may enable better decisions by humans in conducting hostilities in compliance with international humanitarian law and minimizing risks for civilians by facilitating quicker and more widespread collection and analysis of available information,” it says.

But the report also warns of features AI will bring to the battlefield that commanders could find attractive, but that could undermine efforts to protect civilians and possibly “facilitate worse decisions or violations of international humanitarian law and exacerbate risks for civilians, especially given the current limitations of the technology, such as unpredictability, lack of explainability and bias.”

AI could also lead to what the Red Cross called the “increasing personalization of warfare,” with digital systems “bringing together personally identifiable information from multiple sources—including sensors, communications, databases, social media, and biometric data—to form an algorithmically generated determination about a person, their status and their targetability, or to predict their future actions.”

That may have already come to pass. In April, the Israel-based magazine +972, citing a number of sources within the Israeli military, detailed the existence of an AI tool called “Lavender,” designed to designate suspected Hamas and Palestinian Islamic Jihad fighters. According to the magazine, “During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.”

Bottom line: The use of AI in warfare to prevent civilian harm is only as good as the human-specified parameters guiding it. And those parameters will reflect the intentions and priorities of the government using the system. 

Still, when well-applied, AI can have positive effects on civilian harm reduction, according to a 2022 paper from CNA. 

For instance: “Detecting a change from [the] collateral damage estimate by finding differences between imagery used to determine the collateral damage estimate and more recent imagery taken in support of an engagement,” and “alerting the presence of transient civilians by using object identification to automatically monitor for additional individuals around the target area and send an alert if they are detected.”
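To make those two examples concrete, here is a minimal sketch in Python of what such checks could look like, using OpenCV. Everything in it is illustrative: the file names, thresholds, and alert messages are hypothetical, and OpenCV's stock HOG pedestrian detector stands in for whatever object-identification model a real system would use; none of this comes from the CNA paper itself.

```python
# Illustrative sketch only. File names, thresholds, and alert text are
# hypothetical stand-ins, not details from the CNA paper or a fielded system.
import cv2
import numpy as np


def imagery_changed(reference_path: str, recent_path: str,
                    pixel_thresh: int = 30, area_thresh: float = 0.01) -> bool:
    """Flag when newer imagery differs enough from the imagery behind a
    collateral damage estimate that a human should re-check the estimate."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread(recent_path, cv2.IMREAD_GRAYSCALE)
    if ref is None or cur is None:
        raise FileNotFoundError("could not load one of the images")
    # Crude size alignment; a real pipeline would co-register the images.
    cur = cv2.resize(cur, (ref.shape[1], ref.shape[0]))
    # Blur to suppress sensor noise before differencing.
    ref = cv2.GaussianBlur(ref, (5, 5), 0)
    cur = cv2.GaussianBlur(cur, (5, 5), 0)
    diff = cv2.absdiff(ref, cur)
    _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    # Alert if more than area_thresh of the scene has changed.
    return np.count_nonzero(mask) / mask.size > area_thresh


def possible_people_count(frame_path: str) -> int:
    """Count person-shaped detections near the target area with OpenCV's
    stock HOG pedestrian detector; any nonzero count would raise an alert."""
    frame = cv2.imread(frame_path)
    if frame is None:
        raise FileNotFoundError(frame_path)
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(rects)


if __name__ == "__main__":
    if imagery_changed("cde_reference.png", "latest_pass.png"):
        print("ALERT: scene changed since collateral damage estimate; re-review")
    count = possible_people_count("latest_pass.png")
    if count > 0:
        print(f"ALERT: {count} possible person(s) detected near the target area")
```

The design point in both cases is the same one the Red Cross report stresses: the software only raises a flag for a human to review; it does not make the engagement decision itself.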

In other words, AI could play a crucial role in reducing uncertainty about targets, helping commanders better identify which targets to shoot—and which not to.

Of course, the CNA paper cautions, there are limits: AI runs on data, and not all data is perfect at the moment it’s acted upon.

“Even a perfect world—one with few or no uncertainties, with clear demarcations between ‘hostile’ and ‘nonhostile,’ and in which targeting areas (and concomitant weapon blast zones) that preclude any reasonable likelihood of collateral damage are all easily identifiable—will have a non-zero risk to civilians.”

Giving special operations forces better tools to prevent civilian casualties is part of a broader set of transformations that Maier says are essential to competing with China on the global stage. Special operations forces will have to make those changes even as they face budgetary constraints and even cuts.

For instance, the Army is looking to cut as many as 3,000 special operators. Army officials speaking on background to Defense One in September emphasized the cuts would affect non-tactical roles, or so-called enablers such as headquarters staff, logistics, and psychological operations.

However, Maier said those types of enabler roles are precisely what U.S. special operations forces must invest in to compete with China.

“If you've got an Operational Detachment Alpha, the kind of core, 12-man Green Beret team, they're going to have to go out and understand how to do cyber and get in a beam for a potential adversary satellite, and understand how to operate in the environment of ubiquitous technical surveillance, just as much as they're going to have to be able to—10 times out of 10—hit the target they intend to hit if they're going kinetic,” he said. “My fundamental view is the areas we need to invest the most are going to be in those critical enablers. In some cases, that's turning trigger pullers into the experts that can do this.”