Bounding procedure for stochastic dynamic programs with application to the perimeter patrol problem (Conference Paper)

abstract

  • One often encounters the curse of dimensionality when applying dynamic programming to determine optimal policies for controlled Markov chains. In this paper, we provide a method to construct sub-optimal policies, along with a bound on the deviation of such a policy from the optimum, via a linear programming approach. The state space is partitioned and the optimal cost-to-go, or value function, is approximated by a constant over each partition. By minimizing a positive cost function defined on the partitions, one can construct an approximate value function that is also an upper bound on the optimal value function of the original Markov Decision Process (MDP). As a key result, we show that this approximate value function is independent of the positive cost function (or state-dependent weights, as it is referred to in the literature) and, moreover, that it is the least upper bound one can obtain once the partitions are specified. We apply the linear programming approach to a perimeter surveillance stochastic optimal control problem whose structure enables efficient computation of the upper bound. © 2012 AACC (American Automatic Control Council).
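  The sketch below illustrates, under stated assumptions, the kind of aggregation-based linear program the abstract describes: the value function is forced to be constant on each partition, the Bellman inequalities are imposed for every state-action pair, and a positively weighted sum of the aggregate values is minimized. The toy reward-maximization MDP, the partition choice, and the helper `solve_alp` are all illustrative assumptions, not the paper's perimeter patrol model; the only claims borrowed from the abstract are that the minimizer does not depend on the positive weights and that the result upper-bounds the optimal value function.

```python
# Hypothetical sketch of an aggregation-based approximate linear program (ALP)
# on a small synthetic MDP. Not the paper's perimeter patrol formulation.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 2, 0.9

# Toy MDP: random row-stochastic transitions P[u][x, y] and rewards r[x, u].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_states, n_actions))

# Hard aggregation: states {0,1}, {2,3}, {4,5} each share one value.
agg = np.array([0, 0, 1, 1, 2, 2])
K = agg.max() + 1
Phi = np.zeros((n_states, K))
Phi[np.arange(n_states), agg] = 1.0

# Bellman inequalities (Phi v)(x) >= r(x,u) + gamma * sum_y P(y|x,u) (Phi v)(y)
# for every state-action pair, written as A_ub @ v <= b_ub for linprog.
A_ub, b_ub = [], []
for u in range(n_actions):
    for x in range(n_states):
        A_ub.append(-(Phi[x] - gamma * P[u, x] @ Phi))
        b_ub.append(-r[x, u])
A_ub, b_ub = np.array(A_ub), np.array(b_ub)

def solve_alp(c_weights):
    """Minimize c^T (Phi v) over the Bellman-inequality polyhedron."""
    res = linprog(c=c_weights @ Phi, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * K, method="highs")
    return res.x

# The abstract's key result: the minimizer is the same for any positive weights.
v1 = solve_alp(np.ones(n_states))
v2 = solve_alp(rng.random(n_states) + 0.1)
print("weights 1:", np.round(v1, 4))
print("weights 2:", np.round(v2, 4))

# Phi @ v upper-bounds V*, computed here by value iteration for comparison.
V = np.zeros(n_states)
for _ in range(2000):
    V = np.max(r + gamma * np.einsum("uxy,y->xu", P, V), axis=1)
print("upper bound holds:", np.all(Phi @ v1 >= V - 1e-8))
```

  With hard aggregation the feasible set is closed under the componentwise minimum of any two feasible points, so it has a least element; minimizing any strictly positive weighting therefore returns that same least element, which is consistent with the abstract's claim that the bound is independent of the weights once the partitions are fixed.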

name of conference

  • 2012 American Control Conference (ACC)

published proceedings

  • Proceedings of the 2012 American Control Conference

author list (cited authors)

  • Krishnamoorthy, K., Park, M., Darbha, S., Pachter, M., & Chandler, P.

citation count

  • 4

complete list of authors

  • Krishnamoorthy, K.; Park, M.; Darbha, S.; Pachter, M.; Chandler, P.

publication date

  • January 2012