Military Drone Testing Could Lead to Future Deployments and Roles
Theoretically, autonomous vehicles and drones could play a decisive role on the modern battlefield.
Drones, unmanned vehicles and autonomous robots of all shapes and sizes are making big strides into government service. They help local governments with search and rescue, fight COVID-19 infections at public venues and even serve as explorers on other planets. Almost nobody has a problem with autonomous or semi-autonomous vehicles taking on those roles. But when it comes to military service, the issue gets considerably more complicated.
On the battlefield, autonomous vehicles and drones could do everything from scouting the landscape before an attack to evacuating wounded soldiers. They could even be tasked with going into actual combat against enemy troops or vehicles.
The British Army recently held a massive wargame involving thousands of drones and vehicles with varying levels of autonomy. Many different types of drones were tested, including groups operating as a swarm and models that could transition from flying to swimming and back again. During the exercise, individual soldiers were able to send orders or make requests to different autonomous vehicles or drones, testing how human-robot partnerships might operate on the battlefields of the future.
Exercises like the British one undoubtedly generate a lot of information about the strengths and weaknesses of autonomous vehicles in a combat environment. But they are enormous undertakings in terms of both time and resources. A much more economical, and arguably more practical, approach would be to add autonomous vehicles to a simulation that could run near-constant testing of their responses, intelligence and behaviors.
Bohemia Interactive Simulations (BISim) is a leader in this area. The company provides simulation training not just for various branches of the United States military, but also for countries around the world. Recently, its customers have started asking for autonomous vehicles to become part of those simulations. Including smart robots in realistic virtual wargames would let leaders observe how an autonomous vehicle might act under fire, or refine strategies for having humans and robots work together to accomplish their missions.
We talked with Oli Arup, senior vice president of product management at BISim, about the potential role of autonomous vehicles in combat, how the effectiveness of drones could be improved and what roles thinking machines might play in both the near-term and distant future of the world’s most advanced militaries.
Nextgov: From what your customers are telling you, what is driving the desire for drones and autonomous vehicles in the world’s militaries?
Arup: Adding autonomous vehicles allows militaries to significantly increase capabilities without needing more personnel, at a time when the number of active military service members is shrinking worldwide. Autonomous vehicles provide several benefits, such as freeing existing personnel from support and logistics tasks, acting as force multipliers for frontline units, reducing frontline units’ exposure to danger and improving battlefield intelligence.
Nextgov: Are there any other factors driving drone use?
Arup: Commercial sector developments and financial investment are also driving adoption, with things like cheap but high-resolution sensors and less expensive drones. Another factor driving interest is the concept of drone swarms using autonomous swarm artificial intelligence behaviors to create never-before-seen defensive and offensive capabilities.
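Arup did not describe the algorithms behind such swarms, but decentralized flocking rules of the kind popularized by the classic "boids" model give a feel for how coordinated swarm behavior can emerge without any central controller. The Python sketch below is a minimal, hypothetical illustration of that idea; the gains, radii and speed limits are invented for demonstration and are not drawn from BISim or any military system.

    import numpy as np

    def swarm_step(pos, vel, dt=0.1, radius=50.0, max_speed=15.0):
        """One boids-style update for an n-drone swarm.
        pos, vel: (n, 2) arrays of positions and velocities.
        All gains are illustrative, not tuned for any real platform."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            offsets = pos - pos[i]                       # vectors to every other drone
            dists = np.linalg.norm(offsets, axis=1)
            near = (dists > 0) & (dists < radius)        # neighbors within sensing range
            if not near.any():
                continue
            cohesion = offsets[near].mean(axis=0)        # steer toward local group center
            alignment = vel[near].mean(axis=0) - vel[i]  # match neighbors' average heading
            close = (dists > 0) & (dists < radius * 0.3)
            separation = -offsets[close].sum(axis=0) if close.any() else 0.0
            new_vel[i] += dt * (0.05 * cohesion + 0.10 * alignment + 0.20 * separation)
            speed = np.linalg.norm(new_vel[i])
            if speed > max_speed:                        # cap speed at a plausible limit
                new_vel[i] *= max_speed / speed
        return pos + new_vel * dt, new_vel

    # Usage: 100 drones, random starting states, 1,000 simulation ticks.
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 500, size=(100, 2))
    vel = rng.uniform(-5, 5, size=(100, 2))
    for _ in range(1000):
        pos, vel = swarm_step(pos, vel)

From three purely local rules, the group converges on cohesive collective movement, which is the property that makes large numbers of cheap drones tactically interesting.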
Nextgov: Some drone swarms were recently tested in real life during the big United Kingdom military exercise. Could your Virtual Battlespace 4 (VBS4) simulation software, which was recently featured in Nextgov, help the U.S. military test those kinds of autonomous devices without having to stage a major operation in the real world?
Arup: VBS supports the creation and integration of a wide range of technologies, and support for drone swarms could readily be added. Ensuring that the simulated vehicles behave like the real ones can be complex, however. The simplest way is to take the real vehicle system and integrate it into the simulated environment rather than trying to rebuild and replicate an existing behavior.
BISim has already demonstrated the integration of a real autonomous vehicle AI system into VBS. The AI was fed emulated visual and sensor feeds directly from the VBS virtual environment. Given the complexity and accuracy of that feed, the AI was able to interpret the data as if it were the real world and use it to control a virtual representation of the real platform. This is an exciting concept: an autonomous vehicle AI can be validated in a simulated environment where the complexity of any potential battlefield can be replicated at any scale.
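BISim has not published the interface used in that demonstration, but the general sensor-in-the-loop pattern Arup describes is easy to picture: the simulator renders synthetic camera and sensor data each tick, the unmodified vehicle AI consumes it exactly as it would real sensor input, and the AI's control outputs drive the virtual platform. The following Python sketch is a hypothetical rendering of that loop; Simulator, VehicleAI and all field names are illustrative stand-ins, not the actual VBS API.

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        """Emulated sensor data a simulator might render each tick."""
        camera_rgb: bytes   # synthetic camera image
        lidar_ranges: list  # simulated range returns
        gps: tuple          # (lat, lon, alt) of the virtual platform

    @dataclass
    class ControlCommand:
        throttle: float
        steering: float

    def run_ai_in_the_loop(simulator, vehicle_ai, ticks=1000):
        """Closed loop: simulator -> emulated sensors -> real AI -> virtual platform.
        The AI under test is the real, unmodified vehicle software; only its
        inputs and outputs are redirected into the virtual world."""
        for _ in range(ticks):
            frame = simulator.render_sensors()   # emulate visual and sensor feeds
            command = vehicle_ai.step(frame)     # AI interprets them as real-world data
            simulator.apply_controls(command)    # drive the virtual representation
            simulator.advance(dt=0.05)           # step the battlefield scenario forward

The design point is that nothing in the vehicle's decision-making software changes between the simulated test and a live deployment, which is why behavior validated in the loop is meaningful for the real platform.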
Nextgov: We have seen autonomous robots like the THeMIS unmanned ground vehicle, which is used by militaries around the world, added into VBS4. Is anyone also adding potential future robotic technology into the simulation to see how it might affect or interact with human soldiers?
Arup: The U.K. Ministry of Defence has been using VBS as an environment to look into this exact question, examining how future autonomous vehicles fulfilling multiple roles can be used as part of a traditional force structure. Much of our customers’ work is centered not on the technical capability of the vehicles themselves but on the practicalities of issuing orders to such vehicles and, importantly, the potential increase in cognitive load on any humans in the loop. We know of a number of other countries doing similar analyses.
Nextgov: Based on what you are seeing is possible inside the simulations, what do you think the future of autonomous vehicles and robots serving in military roles will look like?
Arup: In the short term, autonomous vehicles will likely fulfill tasks such as last-mile resupply, front-line logistics support to troops and intelligence gathering. Realistically, advancements in artificial intelligence mean there is potential for autonomous vehicles throughout all aspects of the military, with the main obstacles likely to be organizational, legal or moral rather than technical.
In the longer term, assuming these obstacles can be resolved, there is likely to be a proliferation of autonomous munitions and the use of swarms of low-cost air vehicles for both kinetic and [intelligence, surveillance and reconnaissance] tasks. There is also likely to be a drive to swap autonomous vehicles into normal force structures, such that a tank commander, for example, would be commanding largely autonomous subordinates rather than human-crewed vehicles. As the technology improves, the ratio of autonomous to human-crewed vehicles will shift in favor of more autonomous vehicles. This will likely be the case across all domains and will have a huge impact on tactics.
Nextgov: You paint an interesting picture of the future battlefield. As autonomous vehicles take on more of a role in modern militaries, what do you think about having robots with full autonomy able to attack or maybe even kill? That seems to be one area that most people struggle with.
Arup: The prospect of having fully autonomous armed vehicles is rightly a very serious subject. It is hard to imagine any near future where there isn’t a human in the loop to validate the AI decision process. However, it is exactly this kind of question that a simulation environment can help answer without the risk of letting armed AI loose in the real world.
Nextgov: Will simulations like the kind used to train human soldiers today continue to have an important role as more autonomous vehicles and weapons systems come online?
Arup: As the technology becomes more widespread, we believe the ability to test the capability of systems in a safe simulated environment becomes vital. Conceptually, it is the same logic as autonomous car software being trained through machine learning in virtual environments, but obviously the consequences of an error in a weapon system are far more serious: not just injuries and deaths but also the wider impacts of such events.
Currently, there are limited training environments where even semi-autonomous systems can be used and even fewer where fully autonomous vehicles could conceivably be deployed with live weapons as part of a full training scenario. Virtual environments offer the ability to train AI models in a complex digital twin where the AI can encounter the huge complexity of any military engagement.
Just as you wouldn’t want a human soldier to encounter the complexity and stress of the battlefield for the first time during active warfare, you wouldn’t want to trust an AI system in active warfare if its behavior had never been validated in a virtual environment.
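The validation workflow Arup describes can be pictured as an evaluation harness: generate many randomized virtual scenarios, run the candidate AI through each one, and gate any live use on its aggregate performance. This Python sketch is a hypothetical illustration of that idea; the scenario parameters, trial count and pass threshold are invented for demonstration.

    import random

    def make_scenario(rng):
        """Randomize conditions a digital twin could vary between runs."""
        return {
            "weather": rng.choice(["clear", "rain", "fog", "night"]),
            "terrain": rng.choice(["urban", "forest", "desert"]),
            "civilians_present": rng.random() < 0.5,
            "comms_degraded": rng.random() < 0.3,
        }

    def validate_ai(run_scenario, trials=10_000, required_pass_rate=0.999):
        """Run the AI through many randomized scenarios before any live use.
        run_scenario(scenario) -> bool stands in for executing one full
        simulated engagement and scoring whether the AI behaved acceptably."""
        rng = random.Random(42)  # fixed seed so test runs are reproducible
        passes = sum(run_scenario(make_scenario(rng)) for _ in range(trials))
        pass_rate = passes / trials
        print(f"pass rate: {pass_rate:.4f} over {trials} scenarios")
        return pass_rate >= required_pass_rate

Because simulated trials are cheap, such a harness can cover far more weather, terrain and threat combinations than any live exercise, which is the economic argument for simulation made earlier in the article.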
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys