Gradient Descent for General Reinforcement Learning. Baird III, L. C. & Moore, A. W. A simple learning rule is derived, the VAPS algorithm, which can be instantiated to generate a wide range of new reinforcement-learning algorithms. These algorithms solve a number of open problems, define several new approaches to reinforcement learning, and unify different approaches to reinforcement learning under a single theory. These algorithms all have guaranteed convergence, and include modifications of several existing algorithms that were known to fail to converge on simple MDPs. These include Q-learning, SARSA, and advantage learning. In addition to these value-based algorithms, it also generates pure policy-search reinforcement-learning algorithms, which learn optimal policies without learning a value function. In addition, it allows policy-search and value-based algorithms to be combined, thus unifying two very different approaches to reinforcement learning into a single Value and Policy Search (VAPS) algorithm. These algorithms also converge for POMDPs without requiring a proper belief state. Simulation results are given, and several areas for future research are discussed.
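The abstract's central idea, stochastic gradient descent on a per-step error that blends a value-based (SARSA-style) residual with a policy-search term, can be sketched roughly as follows. This is a minimal illustrative sketch under assumed simplifications (tabular weights, softmax policy, a hypothetical blend weight `beta`), not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
gamma, alpha, beta = 0.9, 0.1, 0.5  # discount, step size, value/policy blend (all illustrative)

# Tabular "weights": Q[s, a] doubles as the value estimate and the policy logits.
Q = rng.normal(scale=0.01, size=(n_states, n_actions))

def softmax_policy(q_row, temp=1.0):
    """Softmax action distribution over one state's action values."""
    z = q_row / temp
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def vaps_step(s, a, r, s2, a2, trace):
    """One VAPS-style SGD update on (s, a, r, s', a'); returns the updated
    eligibility trace of log-policy gradients."""
    delta = r + gamma * Q[s2, a2] - Q[s, a]          # SARSA residual
    e = beta * 0.5 * delta**2 + (1 - beta) * (-r)    # blended per-step error

    # Direct gradient of e w.r.t. the weights (value term only, tabular case).
    grad_e = np.zeros_like(Q)
    grad_e[s, a] += beta * delta * (-1.0)
    grad_e[s2, a2] += beta * delta * gamma

    # Accumulate grad log pi(a|s) into the trace, so the policy's effect on
    # which states are visited also enters the gradient.
    p = softmax_policy(Q[s])
    grad_logpi = np.zeros_like(Q)
    grad_logpi[s] -= p
    grad_logpi[s, a] += 1.0
    trace = trace + grad_logpi

    # SGD step: direct error gradient plus error times the policy trace.
    Q[...] = Q - alpha * (grad_e + e * trace)
    return trace
```

Setting `beta = 1` recovers a pure value-based (residual-gradient SARSA-like) update, while `beta = 0` drops the value term and leaves a pure policy-search update, which mirrors the unification the abstract describes.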

@article{iii_gradient_nodate,
	title = {Gradient {Descent} for {General} {Reinforcement} {Learning}},
	abstract = {A simple learning rule is derived, the VAPS algorithm, which can be instantiated to generate a wide range of new reinforcement-learning algorithms. These algorithms solve a number of open problems, define several new approaches to reinforcement learning, and unify different approaches to reinforcement learning under a single theory. These algorithms all have guaranteed convergence, and include modifications of several existing algorithms that were known to fail to converge on simple MDPs. These include Q-learning, SARSA, and advantage learning. In addition to these value-based algorithms, it also generates pure policy-search reinforcement-learning algorithms, which learn optimal policies without learning a value function. In addition, it allows policy-search and value-based algorithms to be combined, thus unifying two very different approaches to reinforcement learning into a single Value and Policy Search (VAPS) algorithm. These algorithms also converge for POMDPs without requiring a proper belief state. Simulation results are given, and several areas for future research are discussed.},
	language = {en},
	author = {Baird, III, Leemon C. and Moore, Andrew W.},
	pages = {7}
}

