State of the Art Control of Atari Games Using Shallow Reinforcement Learning. Liang, Y., Machado, M. C., Talvitie, E., & Bowling, M. arXiv.org, cs.LG, 2015.
The recently introduced Deep Q-Networks (DQN) algorithm has gained attention as one of the first successful combinations of deep neural networks and reinforcement learning. Its promise was demonstrated in the Arcade Learning Environment (ALE), a challenging framework composed of dozens of Atari 2600 games used to evaluate general competency in AI. It achieved dramatically better results than earlier approaches, showing that its ability to learn good representations is quite robust and general. This paper attempts to understand the principles that underlie DQN's impressive performance and to better contextualize its success. We systematically evaluate the importance of key representational biases encoded by DQN's network by proposing simple linear representations that make use of these concepts. Incorporating these characteristics, we obtain a computationally practical feature set that achieves performance competitive with DQN in the ALE. Besides offering insight into the strengths and weaknesses of DQN, we provide a generic representation for the ALE, significantly reducing the burden of learning a representation for each game. Moreover, we also provide a simple, reproducible benchmark for the sake of comparison to future work in the ALE.
@Article{Liang2015,
author = {Liang, Yitao and Machado, Marlos C. and Talvitie, Erik and Bowling, Michael}, 
title = {State of the Art Control of Atari Games Using Shallow Reinforcement Learning}, 
journal = {arXiv.org}, 
volume = {cs.LG}, 
year = {2015}, 
abstract = {The recently introduced Deep Q-Networks (DQN) algorithm has gained attention as one of the first successful combinations of deep neural networks and reinforcement learning. Its promise was demonstrated in the Arcade Learning Environment (ALE), a challenging framework composed of dozens of Atari 2600 games used to evaluate general competency in AI. It achieved dramatically better results than earlier approaches, showing that its ability to learn good representations is quite robust and general. This paper attempts to understand the principles that underlie DQN's impressive performance and to better contextualize its success. We systematically evaluate the importance of key representational biases encoded by DQN's network by proposing simple linear representations that make use of these concepts. Incorporating these characteristics, we obtain a computationally practical feature set that achieves performance competitive with DQN in the ALE. Besides offering insight into the strengths and weaknesses of DQN, we provide a generic representation for the ALE, significantly reducing the burden of learning a representation for each game. Moreover, we also provide a simple, reproducible benchmark for the sake of comparison to future work in the ALE.}
}
