Robust Reinforcement Learning: A Constrained Game-theoretic Approach. Yu, J., Gehring, C., Schäfer, F., & Anandkumar, A.
Reinforcement learning (RL) methods provide state-of-the-art performance in complex control tasks. However, it has been widely recognized that RL methods often fail to generalize due to unaccounted uncertainties. In this work, we propose a game-theoretic framework for robust reinforcement learning that subsumes many previous works as special cases. We formulate robust RL as a constrained minimax game between the RL agent and an environmental agent that represents uncertainties such as model parameter variations and adversarial disturbances. To solve the competitive optimization problems arising in our framework, we propose to use competitive mirror descent (CMD). This method accounts for the interactive nature of the game at each iteration while using Bregman divergences to adapt to the global structure of the constraint set. Leveraging Lagrangian duality, we present a robust RL (RRL) policy gradient algorithm based on CMD. We empirically show that our algorithm is stable for large step sizes, resulting in faster convergence on constrained linear quadratic games.
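As a rough sketch of the setup the abstract describes (the notation below is illustrative and not taken from the paper), robust RL is cast as a constrained minimax game between the RL agent's policy parameters $\theta$ and the parameters $\delta$ of an environmental adversary:
\[
\min_{\theta \in \Theta} \; \max_{\delta \in \Delta} \; J(\theta, \delta),
\qquad
J(\theta, \delta) = \mathbb{E}\Big[\textstyle\sum_{t \ge 0} \gamma^{t}\, c(s_t, a_t)\Big],
\quad a_t \sim \pi_\theta(\cdot \mid s_t), \;\; s_{t+1} \sim P_\delta(\cdot \mid s_t, a_t),
\]
where the constraint sets $\Theta$ and $\Delta$ encode admissible policies and admissible disturbances. A mirror-descent-style update, as used within CMD, replaces the Euclidean proximal term with a Bregman divergence $D_\phi$ adapted to the geometry of the constraint set; for the minimizing player this reads
\[
\theta_{k+1} = \operatorname*{arg\,min}_{\theta \in \Theta} \;
\big\langle \nabla_\theta J(\theta_k, \delta_k), \, \theta \big\rangle
+ \tfrac{1}{\eta}\, D_\phi(\theta, \theta_k),
\]
with step size $\eta$; the maximizing player updates $\delta$ analogously. CMD itself additionally accounts for the opponent's simultaneous update at each iteration, which is what the abstract refers to as the interactive nature of the game.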
@article{yu_robust_nodate,
	title = {Robust {Reinforcement} {Learning}: {A} {Constrained} {Game}-theoretic {Approach}},
	abstract = {Reinforcement learning (RL) methods provide state-of-the-art performance in complex control tasks. However, it has been widely recognized that RL methods often fail to generalize due to unaccounted uncertainties. In this work, we propose a game-theoretic framework for robust reinforcement learning that subsumes many previous works as special cases. We formulate robust RL as a constrained minimax game between the RL agent and an environmental agent that represents uncertainties such as model parameter variations and adversarial disturbances. To solve the competitive optimization problems arising in our framework, we propose to use competitive mirror descent (CMD). This method accounts for the interactive nature of the game at each iteration while using Bregman divergences to adapt to the global structure of the constraint set. Leveraging Lagrangian duality, we present a robust RL (RRL) policy gradient algorithm based on CMD. We empirically show that our algorithm is stable for large step sizes, resulting in faster convergence on constrained linear quadratic games.},
	language = {en},
	author = {Yu, Jing and Gehring, Clement and Schäfer, Florian and Anandkumar, Animashree},
}