Designing Neural Network Architectures using Reinforcement Learning. Baker, B., Gupta, O., Naik, N., & Raskar, R. arXiv:1611.02167 [cs], March 2017. At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.
@article{baker_designing_2017,
title = {Designing {Neural} {Network} {Architectures} using {Reinforcement} {Learning}},
url = {http://arxiv.org/abs/1611.02167},
abstract = {At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.},
language = {en},
urldate = {2019-11-07},
journal = {arXiv:1611.02167 [cs]},
author = {Baker, Bowen and Gupta, Otkrist and Naik, Nikhil and Raskar, Ramesh},
month = mar,
year = {2017},
note = {arXiv: 1611.02167},
keywords = {Computer Science - Machine Learning}
}
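The abstract describes an agent that sequentially picks CNN layers via Q-learning with ε-greedy exploration and experience replay. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the layer vocabulary, maximum depth, reward stub, and all hyperparameters are hypothetical stand-ins (in the paper, the reward is validation accuracy of the trained network).

```python
import random
from collections import defaultdict

LAYER_TYPES = ["conv", "pool", "fc"]   # assumed layer vocabulary
MAX_DEPTH = 3                          # assumed maximum network depth

def fake_reward(architecture):
    # Stand-in for validation accuracy of the sampled CNN; here we
    # simply favor conv-heavy designs so the sketch has a signal.
    return architecture.count("conv") / len(architecture)

def sample_architecture(q, epsilon):
    # Roll out one architecture, one layer at a time (ε-greedy).
    arch = []
    for depth in range(MAX_DEPTH):
        state = (depth, tuple(arch))
        if random.random() < epsilon:                       # explore
            action = random.choice(LAYER_TYPES)
        else:                                               # exploit
            action = max(LAYER_TYPES, key=lambda a: q[(state, a)])
        arch.append(action)
    return arch

def train(episodes=500, epsilon=1.0, alpha=0.1, seed=0):
    random.seed(seed)
    q = defaultdict(float)
    replay = []                                            # experience replay buffer
    for _ in range(episodes):
        arch = sample_architecture(q, epsilon)
        replay.append((arch, fake_reward(arch)))
        # Replay a small batch of past rollouts, nudging Q-values
        # along each trajectory toward that rollout's final reward.
        for past_arch, past_r in random.sample(replay, min(16, len(replay))):
            partial = []
            for depth, action in enumerate(past_arch):
                state = (depth, tuple(partial))
                q[(state, action)] += alpha * (past_r - q[(state, action)])
                partial.append(action)
        epsilon = max(0.1, epsilon * 0.99)                 # anneal exploration
    return q

q = train()
best = sample_architecture(q, epsilon=0.0)                  # greedy rollout
```

The key design point mirrored from the abstract is that the action space is a large but finite set of layer sequences, so a tabular Q-function over (depth, prefix) states suffices for this toy version.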