Adding generative AI to wargame training can improve realism, but not without risk
The next big move may be to make military simulations smarter, specifically by adding advanced AI so that the simulated adversaries can offer better challenges.
“Shall we play a game?” was the chilling question posed by the artificial intelligence controlling the U.S. nuclear arsenal in the 1983 classic film WarGames. The film offered a thrilling, if not quite realistic, look at one potential danger of relying too heavily on AI for military training and wargaming.
In reality, there are many advantages to using realistically simulated battlefield environments to train both frontline soldiers and military decision-makers, not the least of which is the ability to provide combat experience without any actual risk. And while a key focus for military simulations thus far has been increasing realism and graphical fidelity, making the war games look much more like real life, the latest generation of simulations has largely achieved that goal. Those training in virtual reality military simulations see and experience almost everything they would in real life, from volumetric clouds to three-dimensional blades of grass, not to mention all of the latest military hardware and weapons.
The next big move may be to make military simulations smarter, specifically by adding advanced AI so that the simulated adversaries can offer better challenges. That would line up with the DOD AI Adoption Strategy, which was updated late last year. The new strategy calls for adding AI everywhere within the DOD where doing so can give the military an advantage over potential adversaries, which naturally includes VR training simulations.
"As we focused on integrating AI into our operations responsibly and at speed, our main reason for doing so has been straightforward: because it improves our decision advantage," Deputy Defense Secretary Kathleen Hicks said while unveiling the updated strategy at the Pentagon. "From the standpoint of deterring and defending against aggression, AI-enabled systems can help accelerate the speed of commanders' decisions and improve the quality and accuracy of those decisions, which can be decisive in deterring a fight and winning in a fight."
Taking advanced AI to wargaming and training
When it comes to virtual reality military simulations, one of the most popular programs is Virtual Battlespace 4 from developer Bohemia Interactive Simulations, or BISim. Used by the U.S. Army, the U.S. Marine Corps, the Canadian Armed Forces and more than 60 other allied countries, VBS4 can realistically simulate battlefields anywhere in the world.
Nextgov/FCW sat down with BISim Chief Commercial Officer Pete Morrison to talk about the advantages of integrating AI into military simulations, as well as some of the surprising risks and hidden dangers that merging the two technologies might pose.
Nextgov/FCW: Before we dive into the use of AI in battlefield simulations, when we last talked, the war in Ukraine was just heating up, and militaries around the world were seeing how the use of drones rapidly changed battlefield tactics. Is that still the case?
Morrison: Drones are a game-changing technology. Their portability, ease of use and versatility across diverse operational roles are allowing them to fundamentally reshape the dynamics of battlefield engagement in a way not witnessed since earlier revolutionary military innovations changed the conduct of warfare. Much as tanks, helicopters and maneuver tactics evolved battlefield capabilities in their day, these newly emergent tactical drone swarms portend a similar transformation in how armed forces fight.
We’re seeing drones act as a force multiplier in Ukraine, where they’re deployed for critical functions like providing real-time situational awareness of battlefield conditions, carrying out offensive strikes by dropping small munitions and directing precise artillery fire onto targets.
Other countries exploring the use of drones include China, which could potentially use autonomous drones to wreak havoc as part of an invasion strategy against Taiwan. India is developing indigenous military drone capabilities, including collaborative drone swarms. Turkey has developed advanced drones like the Kargu-2, which can swarm targets and are equipped with autonomous capabilities. Israel is also a major drone exporter and developer, with advanced systems like the Harpy that can loiter autonomously and independently identify and attack radar emitters.
Nextgov/FCW: And again, the last time we spoke, you were incorporating drone technology into the simulated environment of VBS4. Is that effort still ongoing?
Morrison: Simulation environments like those created by BISim can provide a safe way to test AI-controlled or autonomous systems before deploying them into the real world. These simulations allow us to model different scenarios to evaluate performance.
At a high level, simulations and war games can model autonomous systems through mathematical aggregates. But at a tactical level, accurately modeling factors like wind, visibility and platform performance is critical to avoid negative training effects. The goal is to deliver training benefits that translate to real-world scenarios while avoiding false confidence in systems that could prove unreliable in battlefield conditions. Careful testing is essential.
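To make that concern concrete, here is a minimal sketch, with entirely invented coefficients, of how a tactical simulation might degrade a drone’s hit probability as wind picks up and visibility drops. Negative training is exactly what would happen if curves like these diverged from real-world performance; nothing here reflects any actual system.

```python
import math

def hit_probability(base_p: float, wind_ms: float, visibility_km: float) -> float:
    """Illustrative model: degrade a weapon's base hit probability by
    wind speed (m/s) and visibility (km). Coefficients are invented
    for demonstration, not validated against any real platform."""
    wind_penalty = math.exp(-0.08 * wind_ms)     # stronger wind -> lower accuracy
    vis_penalty = min(1.0, visibility_km / 5.0)  # full accuracy needs >= 5 km visibility
    return max(0.0, min(1.0, base_p * wind_penalty * vis_penalty))

# A small munition drop that is reliable in calm, clear weather...
print(hit_probability(base_p=0.85, wind_ms=2.0, visibility_km=10.0))  # ~0.72
# ...degrades sharply in gusty, hazy conditions.
print(hit_probability(base_p=0.85, wind_ms=12.0, visibility_km=2.0))  # ~0.13
```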
There’s an ongoing ethical debate about autonomous weapons like drones and AI-powered systems that can decide to use lethal force without human input, and their use needs to be carefully governed and thoroughly tested. Militaries like the U.S. have policies against full autonomy, but adversaries may not show the same restraint, which is likely one of the main reasons our customers ask us to integrate drones into military training simulations.
Nextgov/FCW: Thank you for updating us about that. In terms of AI, the Alan Turing Institute recently released a study that showed how the use of artificial intelligence could assist in wargaming types of simulations. How much AI is currently employed in a simulation like VBS4?
Morrison: VBS4 is primarily a virtual simulation, meaning humans control avatars within the training game. It supports tactical AI that executes carefully developed military behaviors in support of our users’ training needs, and scenario developers add the AI their scenarios require. Still, at the tactical level there is no overarching opponent AI controlling all of the computer-controlled entities. In tactical simulations, our customers expect predictable, repeatable results.
However, we are working with our parent company, BAE Systems, on a different use case: computer wargaming. VBS4 is a tactical simulation that enables cognitive, “how to think” training. Wargaming has very different uses, from command and staff training to modeling large-scale, theatre-wide conflicts. At that scale, AI is extremely important because there are likely too many individual units for humans to control effectively, and we might need to run a scenario hundreds or even thousands of times to determine the most likely outcomes.
Working with BAE, we have integrated VBS4 with products like LG-RAID to meet these challenges. LG-RAID uses Linguistic Geometry to determine possible courses of action in complex, large-scale military scenarios, which can then be simulated at high fidelity in VBS4. This is a unique approach to wargaming, and we are excited to work with BAE in this area. In this way, we can simulate the tactical environment right up to the strategic level using tightly integrated simulations that run simultaneously with different levels of fidelity.
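To make the scale problem concrete, the sketch below shows how a wargaming harness might run many stochastic replications of each candidate course of action and rank them by average outcome. Everything here is hypothetical: the `simulate` function is a toy stand-in for launching a real simulator run, not LG-RAID’s or VBS4’s actual interface.

```python
# Illustrative sketch only: simulate() stands in for driving a real
# simulator run; no actual BISim or BAE API is shown here.
import random
from statistics import mean

def simulate(course_of_action: str, rng: random.Random) -> float:
    """One stochastic replication, returning a mission-success score
    in [0, 1]. A real harness would launch a simulator run here."""
    bias = {"flanking": 0.62, "frontal": 0.45, "envelopment": 0.58}
    return min(1.0, max(0.0, rng.gauss(bias[course_of_action], 0.15)))

def evaluate(courses: list[str], runs: int = 1000, seed: int = 42) -> dict[str, float]:
    """Replay each candidate course of action many times and aggregate,
    since any single run can be an outlier."""
    rng = random.Random(seed)
    return {coa: mean(simulate(coa, rng) for _ in range(runs)) for coa in courses}

results = evaluate(["flanking", "frontal", "envelopment"])
for coa, score in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{coa:12s} mean success: {score:.2f}")
```

The aggregation step is the point: a single run tells a commander little, while the distribution over hundreds of runs is what reveals the most likely outcomes Morrison describes.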
Nextgov/FCW: What about the new generative AIs? How can these more advanced AIs, which are capable of creating their own content and taking independent actions, help to advance VR and simulated military training?
Morrison: Generative AI could allow for more realistic and dynamic virtual environments and computer-controlled entities. Rather than scripted behaviors, generative AI models could make virtual opponents and enemies seem more adaptive, unpredictable and human-like.
Traditionally, simulations have represented enemies with pre-programmed responses or limited decision-making abilities. Future models could potentially spawn diverse personalities, dialogues and behaviors, similar to the non-player characters, or NPCs, controlled by the dungeon master in a Dungeons & Dragons game. Generative AI could create enemy forces that exhibit human-like intelligence and adapt their tactics to a trainee’s actions and the evolving situation, allowing for more realistic and unpredictable engagements and preparing soldiers more effectively for the complexities of real-world combat.
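The sketch below illustrates that contrast: a scripted enemy reaction versus one delegated to a generative model. The `query_model` function is a hypothetical stand-in for whatever text-generation API a simulator might call; it is not a real VBS4 feature.

```python
# Illustrative only: contrasts a scripted enemy reaction with one
# delegated to a generative model. query_model() is a hypothetical
# stand-in for a text-generation API; no real simulator is assumed.

def scripted_reaction(event: str) -> str:
    """Traditional approach: a fixed lookup, predictable and repeatable."""
    table = {
        "under_fire": "take_cover",
        "flanked": "fall_back",
        "trainee_advances": "hold_position",
    }
    return table.get(event, "hold_position")

def generative_reaction(event: str, history: list[str]) -> str:
    """Generative approach: the model sees the evolving situation and
    may adapt its tactics, at the cost of predictability."""
    prompt = (
        "You control an opposing infantry squad in a training scenario.\n"
        f"Recent events: {'; '.join(history)}\n"
        f"Current event: {event}\n"
        "Reply with one action: take_cover, fall_back, flank_left, "
        "flank_right, or hold_position."
    )
    return query_model(prompt)  # hypothetical model call

def query_model(prompt: str) -> str:
    """Stub so the sketch runs offline; a real system would call an LLM."""
    return "flank_right"

history = ["trainee_advances", "under_fire"]
print(scripted_reaction("under_fire"))             # always 'take_cover'
print(generative_reaction("under_fire", history))  # may vary run to run
```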
Nextgov/FCW: That sounds really amazing. But what aspect of generative AI are you the most excited about when it comes to military simulations and training?
Morrison: The most interesting form of generative AI would be one that can examine a battlefield situation and determine the best course of action for commanders — either in training or in the field. However, we need to train the models for this use case, and this will be an incredibly difficult and expensive task.
The amount of data needed to train the AI will be immense, and the modern battlespace is incredibly dynamic; theatres vary widely. Consider a counter-insurgency in Iraq compared with a conventional war like the one in Ukraine: the tactics and weapons being used are completely different. So while generative AI could be used like this in the future, we don’t know when such a capability will become available.
Initiatives like the U.S. Army’s Synthetic Training Environment could theoretically provide relevant data as they roll out over the next five to 10 years, but training the AI will still be very expensive. ChatGPT was trained on a huge swath of the internet, with thousands of humans continuously correcting its responses over a long period to make it more accurate. A similar investment will likely be needed to train generative AI that can accurately advise commanders on decision-making.
Nextgov/FCW: Are there risks in adding more advanced AI to a military training environment that could lead to negative consequences?
Morrison: The current AI used in military simulations is explainable: it typically relies on behavior trees that are hand-crafted to deliver reliable, repeatable results. Newer approaches like LG-RAID still deliver AI that is explainable; at any point you can interrogate the AI to understand why it made the decision it did.
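For readers unfamiliar with the technique, here is a minimal behavior-tree sketch, illustrative rather than BISim’s actual implementation, showing why this style of AI is explainable: every node evaluation leaves a trace that can be interrogated after the fact.

```python
# Minimal behavior tree: a Selector tries children in priority order,
# and every evaluation is logged, so the decision can be interrogated
# afterward -- the explainability property Morrison describes.
# Illustrative only, not BISim's implementation.

class Condition:
    def __init__(self, name, predicate):
        self.name, self.predicate = name, predicate
    def tick(self, state, trace):
        ok = self.predicate(state)
        trace.append(f"condition {self.name}: {'pass' if ok else 'fail'}")
        return ok

class Action:
    def __init__(self, name):
        self.name = name
    def tick(self, state, trace):
        trace.append(f"action {self.name}: executed")
        return True

class Sequence:  # succeeds only if every child succeeds, in order
    def __init__(self, *children):
        self.children = children
    def tick(self, state, trace):
        return all(child.tick(state, trace) for child in self.children)

class Selector:  # tries children until one succeeds
    def __init__(self, *children):
        self.children = children
    def tick(self, state, trace):
        return any(child.tick(state, trace) for child in self.children)

# Hand-crafted priorities: return fire if able, otherwise seek cover.
tree = Selector(
    Sequence(Condition("enemy_visible", lambda s: s["enemy_visible"]),
             Condition("has_ammo", lambda s: s["ammo"] > 0),
             Action("return_fire")),
    Action("seek_cover"),
)

trace: list[str] = []
tree.tick({"enemy_visible": True, "ammo": 0}, trace)
print("\n".join(trace))
# condition enemy_visible: pass
# condition has_ammo: fail
# action seek_cover: executed
```

Run the same state through the tree twice and you get the same trace twice, which is the predictable, repeatable behavior tactical trainers expect, and something a generative model cannot promise.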
The black-box nature of generative AI is certainly a concern for military simulation: if it is not possible to understand why the AI makes a decision, it will be difficult to rely on it for training real soldiers and commanders. Hidden problems could include unexplainable or erroneous AI behaviors; if an AI-powered simulation produces unexpected or unrealistic results, it may be difficult to understand why without full transparency into the model. The primary mitigation strategy will be to build some measure of explainability into the model, further adding to the cost of training the generative AI in the first place.

Understanding how generative AI models create scenarios and make decisions is a requirement, and explainable AI techniques can help increase these models’ transparency and trustworthiness. Overall, generative AI holds immense potential to revolutionize military simulations and virtual training, but addressing these challenges is crucial to ensuring its responsible, effective implementation in training soldiers for the complexities of modern warfare.
Nextgov/FCW: Interesting. Are there any other potential dangers of integrating too much AI into military training?
Morrison: Simulations are wonderful, cost-saving measures. However, if trainees come to rely on flawless AI performance rather than building resilience, they may struggle to handle real-world variability. Militaries must augment simulations with rigorous real-world training, and validating trainee performance after AI is integrated into a simulation could help guard against overreliance on it.
Finally, simulation training can help leaders develop skills like translating strategic goals into actionable plans, adapting those plans to changing conditions, and making quick, informed decisions under pressure, often with limited information. However, if AI is used too prescriptively to model an opponent’s strategy and tactics, it could theoretically erode creative thinking and adaptation. Maintaining human oversight of critical scenario injections could help circumvent that potential issue.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys