Fitted Q-iteration in Continuous Action-space MDPs. Antos, A., Munos, R., & Szepesvári, C. In Advances in Neural Information Processing Systems, pages 9–16, 2007.
We consider continuous state, continuous action batch reinforcement learning, where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy. We study a variant of fitted Q-iteration in which the greedy action selection is replaced by a search for a policy in a restricted set of candidate policies, chosen to maximize the average action values. We provide a rigorous analysis of this algorithm, proving what we believe is the first finite-time bound for value-function-based algorithms for continuous state and action problems. Note: In retrospect, it would have been better to call this algorithm an actor-critic algorithm. The algorithm we consider updates both a policy and a value function (an action-value function in this case).
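The scheme described in the abstract can be sketched in a few lines: a critic step fits Q by regression on Bellman targets evaluated at the current policy's action (in place of an exact continuous-action maximization), and an actor step picks, from a restricted candidate set, the policy with the highest average Q-value over the batch. The sketch below is illustrative only, not the authors' implementation: the 1-D toy dynamics, the polynomial features, and the constant-action policy class are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D batch of transitions (s, a, r, s') generated by a
# behaviour policy; the toy reward and dynamics are illustrative only.
n = 500
S = rng.uniform(-1, 1, n)
A = rng.uniform(-1, 1, n)
R = -(S + A) ** 2                    # toy reward, peaked at a = -s
S2 = np.clip(S + A, -1, 1)           # toy dynamics
gamma = 0.9

def features(s, a):
    # Simple polynomial features for a linear-in-parameters Q-function.
    return np.stack([np.ones_like(s), s, a, s * a, s**2, a**2], axis=-1)

# Restricted policy class (an assumption for this sketch): each candidate
# policy plays one constant action c from a grid.
candidate_actions = np.linspace(-1, 1, 21)

w = np.zeros(6)                      # critic (Q-function) weights
policy_a = 0.0                       # current policy within the class

for it in range(30):
    # Critic step: regress Bellman targets that use the current policy's
    # action at the next state, instead of an exact greedy maximization.
    targets = R + gamma * features(S2, np.full_like(S2, policy_a)) @ w
    w, *_ = np.linalg.lstsq(features(S, A), targets, rcond=None)
    # Actor step: choose the candidate policy maximizing the average
    # action value over the batch.
    avg_q = [(features(S, np.full_like(S, c)) @ w).mean()
             for c in candidate_actions]
    policy_a = candidate_actions[int(np.argmax(avg_q))]
```

With this toy reward, the best constant action averaged over the uniform state distribution is near 0, so the actor step should settle close to the middle of the grid; the point of the sketch is only the alternation of regression-based critic updates with policy selection over a restricted class.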
