Generative AI Wargaming Promises to Accelerate Mission Analysis
A team at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, is creating an artificial intelligence-driven capability that automates much of the work that goes into designing, setting up, developing and running wargames. The effort holds promise to dramatically amplify the impact and value of wargames and similar exercises for the military and other government agencies.
Wargames, along with tabletop exercises and similar efforts in which key stakeholders simulate potential missions in a gamified context, are powerful tools for exploring mission scenarios. They promote preparedness, strengthen communication and tacit understanding across agencies and organizations, and help explore the potential impact of new technologies and capabilities.
To make these critical exercises more efficient, APL saw the potential for large language models (LLMs) to be incorporated into the Advanced Framework for Simulation, Integration and Modeling (AFSIM) — a tool commonly used by the military and the broader national defense community to model weapons platforms and simulate multi-domain conflicts and scenarios.
In a typical wargame, a human analyst writes the concept of operations for how different platforms might behave in different scenarios, codes it into AFSIM and examines the output to determine the best performers. APL tested whether an LLM could play the human’s role, engaging in a loop with a modeling and simulation construct. The answer was a resounding yes.
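The pattern described here resembles a simple analyst-in-the-loop cycle: propose a concept of operations, simulate it, read the results, and propose again. The following is a minimal sketch of that loop in Python, assuming hypothetical llm and simulator objects with complete() and run() methods; it is not APL's implementation or AFSIM's actual interface.

import json

def llm_propose_conops(llm, scenario_brief, prior_results):
    # Ask the language model to draft a concept of operations as structured JSON.
    prompt = (
        "You are a military analyst. Given the scenario and prior simulation "
        "results, propose a concept of operations for each platform.\n"
        f"Scenario: {scenario_brief}\n"
        f"Prior results: {json.dumps(prior_results)}\n"
        "Respond with JSON mapping platform names to tasking."
    )
    return json.loads(llm.complete(prompt))

def run_wargame_loop(llm, simulator, scenario_brief, iterations=5):
    # Alternate between LLM-authored plans and simulation feedback,
    # keeping the best-scoring plan seen so far.
    results = []
    best = None
    for _ in range(iterations):
        conops = llm_propose_conops(llm, scenario_brief, results)
        outcome = simulator.run(conops)  # stand-in for an AFSIM-style batch run
        results.append({"conops": conops, "score": outcome.score})
        if best is None or outcome.score > best["score"]:
            best = results[-1]
    return best

In practice the scoring and tasking formats would be scenario-specific; the point is only that the analyst's write-code-examine cycle can be closed automatically around the simulator.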
“This work is a major step forward in wargaming,” said Andrew Mara, head of APL’s National Security Analysis Department (NSAD). “Senior leaders in the Department of Defense have been seeking something like this for more than a decade, and I think we are finally at the point where the demand signal and the technology are in alignment. Pair that with an outstanding team here at APL and I think we have a chance to change the very nature of wargaming in the national security community.”
The team built a three-player scenario based on a hypothetical conflict, with two “blue” (friendly) players and a “red” (enemy) player, all of whom must make decisions about troop movements, technology deployment and other operational paths. Any player can be a human or an LLM. The combination of innovative AI and modeling and simulation tools could drive the planning and execution cycle of wargames down from months to as little as a few days. In fact, the team has already demonstrated that it can have a new military scenario up and running in less than two weeks.
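One way to picture the "any player can be a human or an LLM" design is a common player interface that maps observed game state to orders, with human and LLM implementations behind it. The sketch below is illustrative only; the Player type and helper functions are assumptions, not the team's actual software.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Player:
    name: str
    side: str                       # "blue" (friendly) or "red" (adversary)
    decide: Callable[[dict], dict]  # maps observed game state to a set of orders

def human_decide(state: dict) -> dict:
    # A human player enters orders at a console.
    raw = input(f"Orders for state {state}: ")
    return {"orders": raw}

def make_llm_decide(llm) -> Callable[[dict], dict]:
    # An LLM player generates orders from the same state; llm is a stand-in client.
    def decide(state: dict) -> dict:
        reply = llm.complete(f"Given this state, issue orders as JSON: {state}")
        return {"orders": reply}
    return decide

# Two blue seats and one red seat; any seat can be swapped between a human
# and an LLM without changing the game loop that calls player.decide(state).
def build_players(llm_client):
    return [
        Player("Blue-1", "blue", human_decide),
        Player("Blue-2", "blue", make_llm_decide(llm_client)),
        Player("Red-1", "red", make_llm_decide(llm_client)),
    ]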
The team is now working to build out a capability that will provide many of the advantages of wargaming in a more readily accessible format.
“Our long-term vision is to create a tool that allows decision-makers at many different levels to arrive at the kind of insights that come from wargaming in a form that lives on their personal computers and allows them to run a lot of repetitions in a short period of time, instead of a full-blown exercise that takes months of planning,” said Kevin Mather, an analyst in NSAD who is leading this work. “Of course, there are advantages that the players get from in-person exercises that you don’t get from a purely digital format, but the idea is to provide more options.”
Beyond Human Reasoning
The team is also working to make the actions of players — both human and AI — more understandable for outside observers and analysts by giving the platform the ability to interpret player actions and analyze the motivations behind them. To do so, they're leveraging another APL effort that is creating an AI assistant for human fighter pilots.
“The AI co-pilot has to form a model of what the pilot is trying to accomplish in any given situation and the reasoning behind the pilot’s decisions,” Mather explained. “We’re trying to apply that same principle to wargaming to enhance the stakeholders’ ability to evaluate strategy.”
That ability could eventually allow AI to formulate strategies that may not have occurred to human players, said Bob Chalmers, who leads the Algorithmic Warfare Analysis Section at APL and is the technical lead of this effort.
“In the field, a commander might ask his officers to come up with three different plans, and they will collectively ‘wargame out’ the plans,” Chalmers said. “You can imagine an AI playing that role — but over thousands of variations instead of a few. Explainability will be a critical part of that new relationship, to give our future commanders the appropriate confidence in their new AI subordinates that their suggestions are rooted in sound reasoning and assumptions.”
The capabilities will be demonstrated at APL’s ADEC conference on Sept. 9 and 10.