Improved Regret Bound and Experience Replay in Regularized Policy Iteration. Yin, D., Abbasi-Yadkori, Y., & Szepesvári, C. In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 6032–6042, 2021. Abstract: In this work, we study algorithms for learning in infinite-horizon undiscounted Markov decision processes (MDPs) with function approximation. We first show that the regret analysis of the Politex algorithm (a version of regularized policy iteration) can be sharpened from $O(T^{3/4})$ to $O(\sqrt{T})$ under nearly identical assumptions, and instantiate the bound with linear function approximation. Our result provides the first high-probability $O(\sqrt{T})$ regret bound for a computationally efficient algorithm in this setting. The exact implementation of Politex with neural network function approximation is inefficient in terms of memory and computation. Since our analysis suggests that we need to approximate the average of the action-value functions of past policies well, we propose a simple efficient implementation where we train a single Q-function on a replay buffer with past data. We show that this often leads to superior performance over other implementation choices, especially in terms of wall-clock time. Our work also provides a novel theoretical justification for using experience replay within policy iteration algorithms.
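The update at the heart of Politex, as described in the abstract, sets each new policy proportional to the exponentiated sum of the action-value estimates of all past policies, $\pi_{k+1}(a|s) \propto \exp(\eta \sum_{i=1}^{k} \hat{Q}_i(s,a))$. A minimal sketch of that softmax step for a single state is below; the names `politex_policy`, `q_sum_row`, and `eta` are illustrative, not taken from the paper, and the paper's proposed implementation would replace the explicit sum with a single Q-function trained on a replay buffer.

```python
import math

def politex_policy(q_sum_row, eta):
    """Softmax over the accumulated Q-values of one state:
    pi(a|s) proportional to exp(eta * sum_i Q_i(s, a)).

    q_sum_row: list of per-action sums of past Q estimates for state s.
    eta: learning-rate / inverse-temperature parameter.
    """
    m = max(q_sum_row)
    # Subtracting the max shifts all logits by a constant, which leaves
    # the softmax unchanged but avoids overflow in exp().
    weights = [math.exp(eta * (q - m)) for q in q_sum_row]
    z = sum(weights)
    return [w / z for w in weights]

# Example: two actions; the action with the larger accumulated value
# receives the larger probability.
probs = politex_policy([1.0, 2.0], eta=1.0)
```

As `eta` grows the policy concentrates on the greedy action; as it shrinks the policy approaches uniform, which is the usual regularized-policy-iteration trade-off.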

@inproceedings{DBLP:conf/icml/LazicYAS21,
author = {
Dong Yin and
Yasin Abbasi{-}Yadkori and
Csaba Szepesv{\'{a}}ri},
 title = {Improved Regret Bound and Experience Replay in Regularized
 Policy Iteration},
pages = {6032--6042},
crossref = {ICML2021longpres},
url_paper = {ICML2021-Politex.pdf},
url_link = {http://proceedings.mlr.press/v139/lazic21a.html},
abstract = {In this work, we study algorithms for learning in infinite-horizon undiscounted Markov decision processes (MDPs) with function approximation. We first show that the regret analysis of the Politex algorithm (a version of regularized policy iteration) can be sharpened from $O(T^{3/4})$ to $O(\sqrt{T})$ under nearly identical assumptions, and instantiate the bound with linear function approximation. Our result provides the first high-probability $O(\sqrt{T})$ regret bound for a computationally efficient algorithm in this setting. The exact implementation of Politex with neural network function approximation is inefficient in terms of memory and computation. Since our analysis suggests that we need to approximate the average of the action-value functions of past policies well, we propose a simple efficient implementation where we train a single Q-function on a replay buffer with past data. We show that this often leads to superior performance over other implementation choices, especially in terms of wall-clock time. Our work also provides a novel theoretical justification for using experience replay within policy iteration algorithms.},
}

