Reward Learning from Human Preferences and Demonstrations in Atari. Ibarz, B., Leike, J., Pohlen, T., Irving, G., Legg, S., & Amodei, D. (2018).
To solve complex real-world problems with reinforcement learning, we cannot rely on manually specified reward functions. Instead, we can have humans communicate an objective to the agent directly. In this work, we combine two approaches to learning from human feedback: expert demonstrations and trajectory preferences. We train a deep neural network to model the reward function and use its predicted reward to train a DQN-based deep reinforcement learning agent on 9 Atari games. Our approach beats the imitation learning baseline in 7 games and achieves strictly superhuman performance on 2 games without using game rewards. Additionally, we investigate the goodness of fit of the reward model, present some reward hacking problems, and study the effects of noise in the human labels.
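As a rough illustration of the trajectory-preference side of this approach (a minimal sketch, not the authors' code): the reward model is fit to human comparisons between pairs of short clips with a Bradley-Terry style cross-entropy loss, as in Christiano et al. (2017). The class name, the tiny fully connected network, the segment shapes, and all hyperparameters below are illustrative assumptions; the demonstration side of the paper (DQfD-style pretraining of the agent) and the additional regularization are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Predicts a scalar reward per observation (here: flat feature vectors)."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, segment_len, obs_dim) -> per-step rewards (batch, segment_len)
        return self.net(obs).squeeze(-1)

def preference_loss(model, seg_a, seg_b, pref):
    """Cross-entropy on P(seg_a preferred), with P(a > b) = sigmoid(return_a - return_b).
    pref is 1.0 if the human preferred seg_a, 0.0 for seg_b, 0.5 for 'no preference'."""
    ret_a = model(seg_a).sum(dim=1)   # predicted return of each left clip
    ret_b = model(seg_b).sum(dim=1)   # predicted return of each right clip
    logits = ret_a - ret_b            # Bradley-Terry comparison
    return F.binary_cross_entropy_with_logits(logits, pref)

# Toy usage with random tensors standing in for short Atari clips.
obs_dim, seg_len, batch = 16, 25, 8
model = RewardModel(obs_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

seg_a = torch.randn(batch, seg_len, obs_dim)
seg_b = torch.randn(batch, seg_len, obs_dim)
pref = torch.randint(0, 2, (batch,)).float()  # stand-in for human labels

loss = preference_loss(model, seg_a, seg_b, pref)
opt.zero_grad()
loss.backward()
opt.step()
print(f"preference loss: {loss.item():.3f}")

In the paper the predicted reward from such a model, rather than the game score, is what the DQN-based agent is trained on; the sketch above covers only the reward-model update.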
@article{ibarzRewardLearningHuman2018,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1811.06521},
  primaryClass = {cs, stat},
  title = {Reward Learning from Human Preferences and Demonstrations in {{Atari}}},
  url = {http://arxiv.org/abs/1811.06521},
  abstract = {To solve complex real-world problems with reinforcement learning, we cannot rely on manually specified reward functions. Instead, we can have humans communicate an objective to the agent directly. In this work, we combine two approaches to learning from human feedback: expert demonstrations and trajectory preferences. We train a deep neural network to model the reward function and use its predicted reward to train a DQN-based deep reinforcement learning agent on 9 Atari games. Our approach beats the imitation learning baseline in 7 games and achieves strictly superhuman performance on 2 games without using game rewards. Additionally, we investigate the goodness of fit of the reward model, present some reward hacking problems, and study the effects of noise in the human labels.},
  urldate = {2019-01-15},
  date = {2018-11-15},
  keywords = {Statistics - Machine Learning,Computer Science - Artificial Intelligence,Computer Science - Machine Learning,Computer Science - Neural and Evolutionary Computing},
  author = {Ibarz, Borja and Leike, Jan and Pohlen, Tobias and Irving, Geoffrey and Legg, Shane and Amodei, Dario},
}
