Burda, Y., Edwards, H., Storkey, A., & Klimov, O. (2018). Exploration by Random Network Distillation. Paper: http://arxiv.org/abs/1810.12894
We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.
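The mechanism described in the abstract is simple to sketch: a fixed, randomly initialized target network maps each observation to a feature vector, a trainable predictor network is regressed onto those features, and the per-observation prediction error serves as the intrinsic exploration bonus (low for familiar states, high for novel ones). Below is a minimal illustrative sketch in PyTorch; the toy dimensions, MLP architectures, and optimizer settings are assumptions for illustration, not the paper's Atari setup, and the paper's scheme for combining intrinsic and extrinsic rewards is not shown.

# Minimal sketch of the RND intrinsic bonus (illustrative only).
import torch
import torch.nn as nn

obs_dim, feat_dim = 64, 32  # assumed toy sizes, not the paper's architecture

# Fixed, randomly initialized target network; its weights are never updated.
target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
for p in target.parameters():
    p.requires_grad_(False)

# Trainable predictor network that is distilled onto the random target.
predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_bonus(obs):
    # Prediction error per observation: novel states are predicted poorly,
    # so they receive a larger exploration bonus.
    with torch.no_grad():
        target_feat = target(obs)
    pred_feat = predictor(obs)
    return ((pred_feat - target_feat) ** 2).mean(dim=-1)

def update_predictor(obs):
    # Training the predictor on visited observations drives the bonus for
    # familiar states toward zero over time.
    loss = intrinsic_bonus(obs).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()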
@article{burdaExplorationRandomNetwork2018,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1810.12894},
  primaryClass = {cs, stat},
  title = {Exploration by {{Random Network Distillation}}},
  url = {http://arxiv.org/abs/1810.12894},
  abstract = {We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.},
  urldate = {2019-01-15},
  date = {2018-10-30},
  keywords = {Statistics - Machine Learning,Computer Science - Artificial Intelligence,Computer Science - Machine Learning},
  author = {Burda, Yuri and Edwards, Harrison and Storkey, Amos and Klimov, Oleg}
}