MT-LQG: Multi-Agent Planning in Belief Space via Trajectory-Optimized LQG

This material is based upon work partially supported by the U.S. Army Research Office under Contract No. W911NF-15-1-0279, the NSF under Science & Technology Center Grant CCF-0939370, and NPRP 6-784-2-329 from the Qatar National Research Fund, a member of Qatar Foundation.

Conference Paper


  • © 2017 IEEE. Belief space planning is concerned with the problem of finding a control policy under process and measurement uncertainties. Formulated as a stochastic control problem, the solution of a general Decentralized Partially Observed Markov Decision Process (Dec-POMDP) is a collection of feedback policies for the individual agents that maximizes a joint value function. In this paper, we design m Linear Quadratic Gaussian (LQG) policies for m agents, maximizing the joint performance of the team. Casting the problem as a Nonlinear Program (NLP), we propose a framework that reduces the optimization dimension from (mn)^2 + mn to mn, where n is the dimension of each individual agent's state space. As a result, the proposed method reduces the formidable generic Dec-POMDP to a computationally tractable multi-agent planning problem under uncertainty. Our results in 2D and 3D environments demonstrate the performance of the algorithm and its ability to predict and avoid inter-agent collisions.
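To make the LQG building block of the abstract concrete, the following is a minimal single-agent sketch, not the paper's MT-LQG algorithm: an LQR feedback gain and a steady-state Kalman gain are each obtained by iterating a discrete Riccati equation (the filter via duality), and the certainty-equivalent controller acts on the state estimate. All system matrices, noise covariances, and the double-integrator model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-agent double-integrator: x = [position, velocity], u = acceleration.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])           # position-only measurement
W = 1e-4 * np.eye(2)                 # process-noise covariance (assumed)
V = np.array([[1e-2]])               # measurement-noise covariance (assumed)
Q, R = np.eye(2), np.array([[0.1]])  # quadratic state/control costs (assumed)

def dare(F, G, Qc, Rc, iters=500):
    """Iterate the discrete algebraic Riccati equation to a fixed point."""
    P = Qc.copy()
    for _ in range(iters):
        K = np.linalg.solve(Rc + G.T @ P @ G, G.T @ P @ F)
        P = Qc + F.T @ P @ (F - G @ K)
    return P, K

# LQR feedback gain (control) and, by duality, the steady-state Kalman gain (estimation).
_, K = dare(A, B, Q, R)
P_e, _ = dare(A.T, C.T, W, V)
L = P_e @ C.T @ np.linalg.inv(C @ P_e @ C.T + V)

x = np.array([1.0, 0.0])             # true state, starting 1 m from the origin
x_hat = np.zeros(2)                  # estimator state
for _ in range(300):
    u = -K @ x_hat                   # certainty-equivalent LQR control
    y = C @ x + rng.multivariate_normal([0.0], V)
    x_pred = A @ x_hat + B @ u       # Kalman predict step
    x_hat = x_pred + L @ (y - C @ x_pred)  # Kalman update step
    x = A @ x + B @ u + rng.multivariate_normal([0.0, 0.0], W)
# After the loop, the true position x[0] is regulated near the origin.
```

MT-LQG couples m such policies through a joint NLP over the nominal trajectories; this sketch only illustrates the per-agent LQG primitive being optimized.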

name of conference

  • 2017 IEEE International Conference on Robotics and Automation (ICRA)

published proceedings

  • 2017 IEEE International Conference on Robotics and Automation (ICRA)

author list (cited authors)

  • Rafieisakhaei, M., Chakravorty, S., & Kumar, P. R.

citation count

  • 0

complete list of authors

  • Rafieisakhaei, Mohammadhussein||Chakravorty, Suman||Kumar, PR

publication date

  • January 2017