Scalable Agent Alignment via Reward Modeling: A Research Direction. Leike, J., Krueger, D., Everitt, T., Martic, M., Maini, V., & Legg, S.
One obstacle to applying reinforcement learning algorithms to real-world problems is the lack of suitable reward functions. Designing such reward functions is difficult in part because the user only has an implicit understanding of the task objective. This gives rise to the agent alignment problem: how do we create agents that behave in accordance with the user's intentions? We outline a high-level research direction to solve the agent alignment problem centered around reward modeling: learning a reward function from interaction with the user and optimizing the learned reward function with reinforcement learning. We discuss the key challenges we expect to face when scaling reward modeling to complex and general domains, concrete approaches to mitigate these challenges, and ways to establish trust in the resulting agents.
@article{leikeScalableAgentAlignment2018,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1811.07871},
  primaryClass = {cs, stat},
  title = {Scalable Agent Alignment via Reward Modeling: A Research Direction},
  url = {http://arxiv.org/abs/1811.07871},
  shorttitle = {Scalable Agent Alignment via Reward Modeling},
  urldate = {2019-01-18},
  date = {2018-11-19},
  keywords = {Statistics - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Neural and Evolutionary Computing},
  author = {Leike, Jan and Krueger, David and Everitt, Tom and Martic, Miljan and Maini, Vishal and Legg, Shane},
}