Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs. Weisz, G., György, A., Kozuno, T., & Szepesvári, C. In NeurIPS, 2022.

Abstract: We consider approximate dynamic programming in $\gamma$-discounted Markov decision processes and apply it to approximate planning with linear value-function approximation. Our first contribution is a new variant of approximate policy iteration (API), called confident approximate policy iteration (CAPI), which computes a deterministic stationary policy with an optimal error bound scaling linearly with the product of the effective horizon $H$ and the worst-case approximation error $\varepsilon$ of the action-value functions of stationary policies. This improvement over API (whose error scales with $H^2$) comes at the price of an $H$-fold increase in memory cost. Unlike Scherrer and Lesner [2012], who recommended computing a non-stationary policy to achieve a similar improvement (with the same memory overhead), we are able to stick to stationary policies. This allows for our second contribution, the application of CAPI to planning with local access to a simulator and $d$-dimensional linear function approximation. As such, we design a planning algorithm that applies CAPI to obtain a sequence of policies with successively refined accuracies on a dynamically evolving set of states. The algorithm outputs an $\tilde{O}(\sqrt{dH\varepsilon})$-optimal policy after issuing $\tilde{O}(dH^4/\varepsilon^2)$ queries to the simulator, simultaneously achieving the optimal accuracy bound and the best known query complexity bound, while earlier algorithms in the literature achieve only one of them. This query complexity is shown to be tight in all parameters except $H$. These improvements come at the expense of a mild (polynomial) increase in memory and computational costs of both the algorithm and its output policy.
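To make the API/CAPI distinction in the abstract concrete, the following is a minimal, illustrative Python sketch of an approximate policy iteration loop on a toy tabular MDP, with a confidence-gated update in the spirit of CAPI: the policy switches to the greedy action at a state only when the estimated action-value gap clears a threshold, and otherwise keeps its previous action. The toy MDP, the noise model standing in for the approximation error $\varepsilon$, and the particular threshold rule are all assumptions made for illustration; this is not the paper's algorithm, its confidence criterion, or its guarantees.

# Illustrative sketch only: generic approximate policy iteration on a small
# tabular MDP, with a hypothetical confidence-gated policy update. All names,
# the threshold rule, and the toy MDP are assumptions for illustration and do
# not reproduce the CAPI algorithm of Weisz et al. (2022).
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 2, 0.9                      # toy MDP: 5 states, 2 actions
P = rng.dirichlet(np.ones(S), size=(S, A))   # random transition kernel P[s, a, s']
R = rng.uniform(size=(S, A))                 # rewards r(s, a)

def q_pi(policy):
    """Exact action-values of a deterministic policy (solve the linear system)."""
    P_pi = P[np.arange(S), policy]                   # S x S transition matrix under pi
    r_pi = R[np.arange(S), policy]                   # rewards under pi
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return R + gamma * P @ v                         # q(s, a) = r(s, a) + gamma * E[v(s')]

def noisy_q(policy, eps):
    """Stand-in for value-function approximation: exact q plus bounded noise."""
    return q_pi(policy) + rng.uniform(-eps, eps, size=(S, A))

def api_step(policy, eps, threshold=None):
    """One (C)API-style step: act greedily w.r.t. an approximate q; if a
    confidence threshold is given, only switch the action at states where the
    estimated improvement exceeds it, keeping the previous action elsewhere."""
    q = noisy_q(policy, eps)
    greedy = q.argmax(axis=1)
    if threshold is None:
        return greedy                                # plain API update
    new_policy = policy.copy()
    gap = q.max(axis=1) - q[np.arange(S), policy]    # estimated improvement per state
    new_policy[gap > threshold] = greedy[gap > threshold]
    return new_policy

policy = np.zeros(S, dtype=int)
for _ in range(50):
    policy = api_step(policy, eps=0.05, threshold=0.1)
print("final policy:", policy)

The gating step is only meant to convey the idea of updating the policy where estimates can be trusted; the paper's actual construction, error bounds, and memory/computation trade-offs are given in the reference above.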
@inproceedings{wegysz22,
title = {Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs},
author = {Weisz, Gell\'ert and Gy\"orgy, Andr\'as and Kozuno, Tadashi and Szepesv{\'a}ri, Csaba},
booktitle = {NeurIPS},
acceptrate= {2665 out of 10411 = 25.6\%},
month = {11},
year = {2022},
url_link={},
url_paper = {NeurIPS2022_CAPI.pdf},
abstract = {We consider approximate dynamic programming in $\gamma$-discounted Markov decision processes and apply it to approximate planning with linear value-function approximation. Our first contribution is a new variant of approximate policy iteration (API), called confident approximate policy iteration (CAPI), which computes a deterministic stationary policy with an optimal error bound scaling linearly with the product of the effective horizon $H$ and the worst-case approximation error $\varepsilon$ of the action-value functions of stationary policies. This improvement over API (whose error scales with $H^2$) comes at the price of an $H$-fold increase in memory cost. Unlike Scherrer and Lesner [2012], who recommended computing a non-stationary policy to achieve a similar improvement (with the same memory overhead), we are able to stick to stationary policies. This allows for our second contribution, the application of CAPI to planning with local access to a simulator and $d$-dimensional linear function approximation. As such, we design a planning algorithm that applies CAPI to obtain a sequence of policies with successively refined accuracies on a dynamically evolving set of states. The algorithm outputs an $\tilde{O}( \sqrt{dH\varepsilon})$-optimal policy after issuing $\tilde{O}(dH^4/\varepsilon^2)$ queries to the simulator, simultaneously achieving the optimal accuracy bound and the best known query complexity bound, while earlier algorithms in the literature achieve only one of them. This query complexity is shown to be tight in all parameters except $H$. These improvements come at the expense of a mild (polynomial) increase in memory and computational costs of both the algorithm and its output policy.},
keywords = {reinforcement learning, policy iteration, local planning, simulators, MDPs, linear function approximation}
}