Understanding the Effect of Stochasticity in Policy Optimization. Mei, J., Dai, B., Xiao, C., Szepesvári, C., & Schuurmans, D. In NeurIPS, pages 19339–19351, 2021.
Abstract: We study the effect of stochasticity in on-policy policy optimization, and make the following four contributions. First, we show that the preferability of optimization methods depends critically on whether stochastic versus exact gradients are used. In particular, unlike the true gradient setting, geometric information cannot be easily exploited in the stochastic case for accelerating policy optimization without detrimental consequences or impractical assumptions. Second, to explain these findings we introduce the concept of committal rate for stochastic policy optimization, and show that this can serve as a criterion for determining almost sure convergence to global optimality. Third, we show that in the absence of external oracle information, which allows an algorithm to determine the difference between optimal and sub-optimal actions given only on-policy samples, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely. That is, an uninformed algorithm either converges to a globally optimal policy with probability $1$ but at a rate no better than $O(1/t)$, or it achieves faster than $O(1/t)$ convergence but then must fail to converge to the globally optimal policy with some positive probability. Finally, we use the committal rate theory to explain why practical policy optimization methods are sensitive to random initialization, then develop an ensemble method that can be guaranteed to achieve near-optimal solutions with high probability.
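The committal phenomenon the abstract describes can be seen in a minimal sketch (not the authors' code; the bandit rewards, step size, and horizon below are illustrative choices): a REINFORCE-style softmax policy gradient on a two-armed bandit with deterministic rewards. With a large constant step size, whichever arm is sampled first tends to dominate the softmax so quickly that the other arm may never be sampled again, so the method commits to the suboptimal arm with positive probability.

```python
import math
import random

def softmax(theta):
    """Numerically stable softmax over a list of logits."""
    m = max(theta)
    exp = [math.exp(t - m) for t in theta]
    z = sum(exp)
    return [e / z for e in exp]

def run_trial(rng, rewards=(1.0, 0.9), eta=10.0, steps=200):
    """On-policy stochastic softmax policy gradient (REINFORCE) on a
    2-armed bandit: sample a ~ pi(theta), then apply
    theta[i] += eta * r(a) * (1{i == a} - pi[i]).
    Returns the final policy probabilities."""
    theta = [0.0, 0.0]
    for _ in range(steps):
        pi = softmax(theta)
        a = 0 if rng.random() < pi[0] else 1  # on-policy sample
        for i in range(2):
            grad_log = (1.0 if i == a else 0.0) - pi[i]
            theta[i] += eta * rewards[a] * grad_log
    return softmax(theta)

# Arm 0 is optimal (reward 1.0 > 0.9), yet with a large step size a
# non-negligible fraction of runs commits to the suboptimal arm 1.
rng = random.Random(0)
committed_to_suboptimal = sum(run_trial(rng)[1] > 0.99 for _ in range(100))
print(f"{committed_to_suboptimal}/100 runs committed to the suboptimal arm")
```

Shrinking `eta` (or decaying it over time) restores almost-sure convergence in this sketch, consistent with the abstract's point that faster-than-$O(1/t)$ acceleration comes at the cost of a positive failure probability.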
@inproceedings{MeiDXSzS21,
author = {Jincheng Mei and Bo Dai and Chenjun Xiao and Csaba Szepesv\'ari and Dale Schuurmans},
title = {Understanding the Effect of Stochasticity in Policy Optimization},
booktitle = {NeurIPS},
crossref = {NeurIPS2021poster},
pages = {19339--19351},
year = {2021},
url_link = {https://proceedings.neurips.cc/paper/2021/hash/a12f69495f41bb3b637ba1b6238884d6-Abstract.html},
url_paper = {NeurIPS2021_commitalrate.pdf},
abstract = {We study the effect of stochasticity in on-policy policy optimization, and make the following four contributions.
First, we show that the preferability of optimization methods depends critically on whether stochastic versus exact gradients are used. In particular, unlike the true gradient setting, geometric information \emph{cannot} be easily exploited in the stochastic case for accelerating policy optimization without detrimental consequences or impractical assumptions.
Second, to explain these findings we introduce the concept of \emph{committal rate} for stochastic policy optimization, and show that this can serve as a criterion for determining almost sure convergence to global optimality.
Third, we show that in the absence of external oracle information, which allows an algorithm to determine the difference between optimal and sub-optimal actions given only on-policy samples, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely. That is, an uninformed algorithm either converges to a globally optimal policy with probability $1$ but at a rate no better than $O(1/t)$, or it achieves faster than $O(1/t)$ convergence but then must fail to converge to the globally optimal policy with some positive probability.
Finally, we use the committal rate theory to explain why practical policy optimization methods are sensitive to random initialization, then develop an ensemble method that can be guaranteed to achieve near-optimal solutions with high probability.}
}