Evolution Strategies as a Scalable Alternative to Reinforcement Learning. Salimans, T., Ho, J., Chen, X., Sidor, S., & Sutskever, I. arXiv preprint arXiv:1703.03864, March 2017.
We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.
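The update the abstract alludes to is easy to sketch. Below is a minimal, single-process Python sketch of the ES estimator the paper describes, theta <- theta + alpha / (n * sigma) * sum_i F(theta + sigma * eps_i) * eps_i, with the rank-based fitness shaping the authors mention; the function name, hyperparameter values, and toy objective are illustrative assumptions, not the paper's implementation, and the distributed common-random-numbers trick is only noted in a comment.

import numpy as np

def es_update_loop(f, theta0, sigma=0.1, alpha=0.01, n_pop=50, n_iters=200, seed=0):
    """Single-process sketch of the ES update:
    theta <- theta + alpha / (n_pop * sigma) * sum_i F(theta + sigma * eps_i) * eps_i.
    Only episodic returns F(.) are needed; no gradients of f.
    In the paper's distributed version, each worker regenerates every eps_i
    from shared random seeds (common random numbers), so workers only need
    to exchange the scalar returns.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        eps = rng.standard_normal((n_pop, theta.size))          # Gaussian perturbations
        returns = np.array([f(theta + sigma * e) for e in eps])  # black-box evaluations
        # Centered-rank fitness shaping (a rank transformation, as in the paper).
        ranks = returns.argsort().argsort()
        shaped = ranks / (n_pop - 1) - 0.5
        theta = theta + alpha / (n_pop * sigma) * (eps.T @ shaped)
    return theta

# Illustrative usage on a toy "return" (hypothetical, not from the paper):
best = es_update_loop(lambda w: -np.sum((w - 3.0) ** 2), theta0=np.zeros(5))

Because the update only averages scaled noise vectors weighted by scalar returns, the estimator needs no backpropagation through the policy or the environment, which is what makes the approach invariant to action frequency, delayed rewards, and long horizons.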
@article{salimans2017evolution,
 title = {Evolution Strategies as a Scalable Alternative to Reinforcement Learning},
 author = {Salimans, Tim and Ho, Jonathan and Chen, Xi and Sidor, Szymon and Sutskever, Ilya},
 year = {2017},
 month = {3},
 day = {10},
 journal = {arXiv preprint arXiv:1703.03864},
 url = {http://arxiv.org/abs/1703.03864},
 keywords = {evolutionary_algorithms, reinforcement_learning}
}
