Deep Reinforcement Learning That Matters. Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., & Meger, D. arXiv:1709.06560, 2017.
Paper: http://arxiv.org/abs/1709.06560
In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
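As a concrete illustration of the significance metrics and multi-seed reporting the abstract calls for, the following minimal Python sketch (not from the paper; all returns are synthetic stand-ins) compares hypothetical per-seed final returns of two algorithms using a bootstrap confidence interval and Welch's t-test, two evaluation tools discussed in this line of work.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-seed final average returns for two algorithms.
# Five seeds each: sample sizes this small are common in deep RL papers.
algo_a = rng.normal(loc=1000.0, scale=300.0, size=5)
algo_b = rng.normal(loc=1150.0, scale=300.0, size=5)

def bootstrap_ci(x, n_boot=10_000, alpha=0.05):
    # Report the mean with a bootstrap confidence interval
    # rather than the return of a single best seed.
    means = [rng.choice(x, size=len(x), replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return x.mean(), lo, hi

for name, x in [("A", algo_a), ("B", algo_b)]:
    mean, lo, hi = bootstrap_ci(x)
    print(f"algo {name}: mean={mean:.1f}, 95% CI=({lo:.1f}, {hi:.1f})")

# Welch's t-test (no equal-variance assumption) for the difference in means.
t, p = stats.ttest_ind(algo_a, algo_b, equal_var=False)
print(f"Welch's t-test: t={t:.2f}, p={p:.3f}")

With only five seeds per algorithm, overlapping intervals and an inconclusive p-value are common even when the mean returns differ noticeably, which is exactly the reporting ambiguity the paper warns about.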
@article{hendersonDeepReinforcementLearning2017,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1709.06560},
  primaryClass = {cs.LG},
  title = {Deep {{Reinforcement Learning}} That {{Matters}}},
  url = {http://arxiv.org/abs/1709.06560},
  abstract = {In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.},
  urldate = {2019-04-16},
  date = {2017-09-19},
  keywords = {Statistics - Machine Learning,Computer Science - Machine Learning},
  author = {Henderson, Peter and Islam, Riashat and Bachman, Philip and Pineau, Joelle and Precup, Doina and Meger, David},
}
