Scaling Up Reinforcement Learning through Targeted Exploration Conference Paper

abstract

  • Recent Reinforcement Learning (RL) algorithms, such as R-MAX, make (with high probability) only a small number of poor decisions. In practice, these algorithms do not scale well as the number of states grows because they spend too much effort exploring. We introduce an RL algorithm, State TArgeted R-MAX (STAR-MAX), that explores only a subset of the state space, called the exploration envelope. When the envelope equals the total state space, STAR-MAX behaves identically to R-MAX. When the envelope is a proper subset of the state space, a recovery rule is needed to keep exploration within the envelope. We compared existing algorithms with our algorithm employing various exploration envelopes. With an appropriate choice of envelope, STAR-MAX scales far better than existing RL algorithms as the number of states increases. A possible drawback of our algorithm is its dependence on a good choice of exploration envelope and recovery rule. However, we show that an effective recovery rule can be learned on-line or from demonstrations. We also find that even randomly sampled exploration envelopes can improve cumulative rewards compared to R-MAX. We expect these results to lead to more efficient methods for RL in large-scale problems.
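
  The abstract describes the core idea only at a high level, so the following is a minimal sketch, not the authors' implementation: an R-MAX-style tabular agent whose optimistic exploration is restricted to a supplied exploration envelope, with a recovery rule used whenever the agent finds itself outside it. The class name EnvelopeRMaxAgent, the visit-count threshold m, the running-average value update, and the recovery_rule callback are all illustrative assumptions; the actual STAR-MAX algorithm plans over a learned model as R-MAX does.

    # Minimal sketch (assumed, not from the paper) of R-MAX-style exploration
    # restricted to an "exploration envelope": a user-supplied subset of states.
    from collections import defaultdict

    class EnvelopeRMaxAgent:
        def __init__(self, actions, envelope, recovery_rule, r_max=1.0, m=5):
            self.actions = actions              # available actions
            self.envelope = set(envelope)       # subset of states to explore
            self.recovery_rule = recovery_rule  # state -> action steering back to envelope
            self.r_max = r_max                  # optimistic reward for under-visited pairs
            self.m = m                          # visits before a (state, action) is "known"
            self.counts = defaultdict(int)      # (state, action) -> visit count
            self.value = defaultdict(float)     # crude (state, action) value estimate

        def act(self, state):
            # Outside the envelope: follow the recovery rule instead of exploring.
            if state not in self.envelope:
                return self.recovery_rule(state)
            # Inside the envelope: be optimistic about under-visited actions (R-MAX style).
            def optimistic_value(a):
                if self.counts[(state, a)] < self.m:
                    return self.r_max
                return self.value[(state, a)]
            return max(self.actions, key=optimistic_value)

        def update(self, state, action, reward):
            # Simple running-average update; the full algorithm would instead
            # re-plan over a learned transition/reward model.
            self.counts[(state, action)] += 1
            n = self.counts[(state, action)]
            self.value[(state, action)] += (reward - self.value[(state, action)]) / n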

published proceedings

  • Proceedings of the AAAI Conference on Artificial Intelligence

author list (cited authors)

  • Mann, T., & Choe, Y.

citation count

  • 1

complete list of authors

  • Mann, Timothy||Choe, Yoonsuck

publication date

  • November 2011