Multi-robot planning with conflicts and synergies Academic Article uri icon

abstract

  • Multi-robot planning (MRP) aims at computing plans, each in the form of a sequence of actions, for a team of robots to achieve their individual goals while minimizing overall cost. Solving MRP problems requires modeling limited domain resources (e.g., corridors that allow at most one robot at a time) and the possibility of action synergy (e.g., multiple robots going through a door after a single door-opening action). Optimally solving MRP problems is hard: it generalizes single-agent planning, which is known to be NP-hard, and frequently requires considering the states of all the robots, resulting in an exponentially growing joint state and action space. In many MRP domains, robots encounter situations where they have conflicting needs for constrained resources, or where they can take advantage of what the others are doing to form synergies. In this article, we formulate the problem of multi-robot planning with conflicts and synergies (MRPCS), and develop a multi-robot planning framework, called iterative inter-dependent planning (IIDP), for representing and solving MRPCS problems. Within the IIDP framework, we develop the increasing-dependency and best-alternative algorithms, which exhibit different trade-offs between plan quality and computational efficiency. Extensive experiments covering the suggested algorithms have been performed using both an abstract-domain simulator, in which we can automatically generate a variety of domain configurations, and a practical MRPCS instantiation that focuses on multi-robot navigation. In the navigation domain, we model plan costs under temporal uncertainty, and present a novel shifted-Poisson distribution for accumulating temporal uncertainty over actions. In comparison to baseline approaches, our algorithms produce significant reductions in overall plan cost while avoiding search in the joint state space. In addition, we present a complete demonstration of the model implemented on a team of real robots.
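The abstract mentions a shifted-Poisson distribution for accumulating temporal uncertainty over actions but does not spell out the construction. A minimal sketch of one natural reading: each action's duration is a fixed minimum time (the shift) plus a Poisson-distributed delay, so accumulating over a plan just sums the shifts and sums the rates, since a sum of independent Poisson variables is again Poisson. All function names and parameter values below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sample_shifted_poisson(shift, lam, rng):
    """Draw one shifted-Poisson duration: `shift` plus a Poisson(lam) delay.

    Uses Knuth's inversion method, which is fine for small rates.
    """
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return shift + k
        k += 1

def accumulate(actions):
    """Compose per-action (shift, rate) pairs into the whole plan's
    shifted-Poisson parameters: shifts add, rates add."""
    total_shift = sum(s for s, _ in actions)
    total_lam = sum(l for _, l in actions)
    return total_shift, total_lam

rng = random.Random(0)
# Hypothetical plan of three actions: (minimum time, Poisson delay rate).
actions = [(2.0, 1.5), (1.0, 0.5), (3.0, 2.0)]
shift, lam = accumulate(actions)  # analytic composition: (6.0, 4.0)

# The mean of a shifted Poisson is shift + lam; check via Monte Carlo.
samples = [sum(sample_shifted_poisson(s, l, rng) for s, l in actions)
           for _ in range(20000)]
emp_mean = sum(samples) / len(samples)
print(f"plan shift={shift}, rate={lam}, empirical mean={emp_mean:.2f}")
```

The appeal of this closure property is that a planner can propagate a full duration distribution through an arbitrarily long action sequence with two additions per action, rather than convolving distributions numerically.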

author list (cited authors)

  • Jiang, Y., Yedidsion, H., Zhang, S., Sharon, G., & Stone, P.

publication date

  • January 1, 2019