
Coordinating Tactical Teams of Autonomous Machines

A team of robots navigates through the rubble-strewn scene of a remote natural disaster, looking for survivors. From a distance, another team provides situational awareness, alerting the first team to evolving hazards. As the search continues, the two teams dynamically swap roles to keep up with the emerging situation, with one team serving as lookout while the other carries out the rescue mission.

Like many scenarios once considered pure science fiction, this disaster response use case is plausible for multirobot teams in the not-too-distant future. However, implementing it in the real world presents some thorny technical problems. Researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, have made significant progress toward addressing some of the fundamental challenges that stand in the way of the vision becoming reality.

One of these foundational challenges is the sheer computational complexity of the problem. Each robot has to not only navigate an unpredictable environment and perform certain tasks but also coordinate its movements and actions with its team. On top of that, in a scenario like the one described above, both teams have to coordinate their movements and actions in tandem, in real time, and in response to a dynamic and fluid situation. Because the complexity of the required calculations grows exponentially with each additional robot, it’s not hard to see how the problem quickly becomes intractable.

“Because of this combinatorial explosion that happens when you try to do both coordination and motion planning with multirobot teams, most approaches to this challenge have either solved each problem in isolation or else addressed both problems in the context of simplified, structured environments,” said Joseph Moore, chief scientist for the Robotics Group in APL’s Research and Exploratory Development Department. “Neither approach is sufficient if the end goal is equipping multirobot teams to succeed in real-world missions.”

Moore and his team are leading efforts to do just that by applying innovative techniques that can solve for both the coordination and the motion planning pieces of the puzzle.

A commonly used approach to robot pathfinding and multirobot collision avoidance is to represent environments using topological graphs. Rather than relying on precise geometric coordinates, a topological graph represents the environment with two kinds of elements: nodes, which are points that correspond to specific locations, and edges, which represent the paths connecting those locations.
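As a rough sketch, such a graph can be written as little more than a set of named locations and the connections between them. The node names below are invented for illustration and are not drawn from APL’s system.

```python
# A minimal topological graph: nodes are named locations, edges are the
# traversable connections between them (names are illustrative only).
graph = {
    "entrance":  ["hallway", "courtyard"],
    "hallway":   ["entrance", "stairwell"],
    "courtyard": ["entrance", "stairwell"],
    "stairwell": ["hallway", "courtyard"],
}

def neighbors(node):
    """Locations a robot can reach in a single move from the given node."""
    return graph[node]
```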

While this simplifies the environment greatly and allows for efficient computation, it still doesn’t scale well to multirobot teams coordinating their actions in real environments.

Moore and his team put a creative spin on the topological graph technique. They observed a relationship between a robot’s location on a graph and the characteristics of the graph itself. For example, if a robot is currently at point A and can navigate to either point B or point C, the relative risk of each choice is quantifiably different from the risks faced if the robot were navigating from point D.

In other words, there is a relationship between the robot’s current state and the relative risk of each action it could take from that state, and these risks can be represented as “costs” for each action. And, crucially, those costs are quantifiable.
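A toy version of that idea might look like the following, where the cost of an action depends on the node the robot is leaving as well as the node it is heading toward. The node names and numbers here are invented for illustration.

```python
# Hypothetical risk "costs" keyed by (current node, next node): the same
# destination can carry a different cost depending on where the robot starts.
edge_cost = {
    ("A", "B"): 1.0,   # short, covered route
    ("A", "C"): 4.0,   # crosses exposed ground
    ("D", "B"): 2.5,   # same destination, different approach, different risk
    ("D", "C"): 0.5,
}

def action_cost(current_node, next_node):
    """Quantified risk of taking one action from the robot's current state."""
    return edge_cost[(current_node, next_node)]
```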

That insight led the team to the realization that they could encode the variability in risk into the graph itself. Rather than multiplying the number of possible states that the navigation algorithm has to track, they encoded the cost of each possible action as a property of the graph’s edges and solved for the best actions using a technique called mixed-integer programming. The result is a dynamic set of graphs that change on the fly regardless of how many robots are involved, allowing multirobot teams to coordinate their actions without unmanageable complexity. As the situation evolves and robots change configurations, new graphs emerge to inform their next moves.
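The article doesn’t spell out APL’s exact formulation, but a generic mixed-integer program over a cost-weighted graph gives a feel for how per-edge costs become terms an optimizer can reason about. The sketch below is an assumption-laden illustration, not STALC itself: it uses the open-source PuLP library, a single robot, and made-up node names and costs, and it simply picks a minimum-cost path.

```python
# Generic illustration: choose a minimum-cost path on a graph by solving a
# mixed-integer program with one binary variable per edge (costs are invented).
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

nodes = ["A", "B", "C", "D"]
edges = {("A", "B"): 1.0, ("A", "C"): 4.0, ("B", "D"): 2.0, ("C", "D"): 0.5}
start, goal = "A", "D"

prob = LpProblem("min_cost_path", LpMinimize)
use = {e: LpVariable(f"use_{e[0]}_{e[1]}", cat=LpBinary) for e in edges}

# Objective: total risk of the edges the robot actually traverses.
prob += lpSum(cost * use[e] for e, cost in edges.items())

# Flow conservation: one unit of "flow" leaves the start and arrives at the
# goal, forcing the selected edges to form a path from start to goal.
for n in nodes:
    outflow = lpSum(use[e] for e in edges if e[0] == n)
    inflow = lpSum(use[e] for e in edges if e[1] == n)
    prob += outflow - inflow == (1 if n == start else -1 if n == goal else 0)

prob.solve()
path = [e for e in edges if use[e].value() > 0.5]  # here: [("A", "B"), ("B", "D")]
```

In a multirobot setting, this kind of machinery extends naturally in principle: each robot gets its own edge variables, and coordination rules, such as two robots not sharing an edge at the same time, become additional constraints rather than an enlarged joint state space.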

“As the robots traverse their environment, we’re able to capture those dynamics of the graph edges as constraints in an optimization problem,” Moore explained. “This prevents us from having to search over a larger space representing each robot and all its possible states.”

This ability to represent the problem in terms of constraints and costs makes their framework broadly applicable to a variety of scenarios and environments.

The team has gotten this framework — dubbed Stratified Topological Autonomy with Learned Contexts (STALC) — running effectively in simulations of real-world missions, handling the problem from high-level planning all the way down to the individual robot level. They also demonstrated their algorithms on live robots in an exercise held in Texas in June.

This work is funded by the Army Research Laboratory (ARL), and Christopher Korpela, who manages the Robotics and Autonomy program in APL’s Research and Exploratory Development Mission Area, noted ARL’s excellent partnership and innovative work in this domain.

“ARL is very focused on this kind of basic, foundational research for enabling future missions, just as we are here at APL,” Korpela said. “It’s very exciting to see breakthrough projects like this come to fruition. We anticipate more innovations that push the boundaries of multirobot teaming as a result of our collaboration moving forward.”
