Indefinite-Horizon Reachability in Goal-DEC-POMDPs. Chatterjee, K. & Chmelík, M. In Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS 2016), Robotics Track.

Abstract: DEC-POMDPs extend POMDPs to a multi-agent setting, where several agents operate independently in an uncertain environment to achieve a joint objective. DEC-POMDPs have been studied with finite-horizon and infinite-horizon discounted-sum objectives, and solvers exist for both exact and approximate solutions. In this work we consider Goal-DEC-POMDPs, where, given a set of target states, the objective is to ensure that the target set is reached with minimal cost. We consider the indefinite-horizon problem (infinite-horizon with either discounted-sum or undiscounted-sum cost, where absorbing goal states have zero cost). We present a novel method that extends techniques for finite-horizon DEC-POMDPs and the RTDP-Bel approach for POMDPs. We present experimental results on several examples and show that our approach yields promising results.
@inproceedings{icaps16-33,
 track = {Robotics Track},
 title = {Indefinite-Horizon Reachability in Goal-DEC-POMDPs},
 booktitle = {Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS 2016)},
 year = {2016},
 url = {http://www.aaai.org/ocs/index.php/ICAPS/ICAPS16/paper/view/12999},
 author = {Krishnendu Chatterjee and Martin Chmelík},
 abstract = {DEC-POMDPs extend POMDPs to a multi-agent setting, where several agents operate independently in an uncertain environment to achieve a joint objective. DEC-POMDPs have been studied with finite-horizon and infinite-horizon discounted-sum objectives, and solvers exist for both exact and approximate solutions.
In this work we consider Goal-DEC-POMDPs, where, given a set of target states, the objective is to ensure that the target set is reached with minimal cost.
We consider the indefinite-horizon problem (infinite-horizon with either discounted-sum or undiscounted-sum cost, where absorbing goal states have zero cost).
We present a novel method that extends techniques for finite-horizon DEC-POMDPs and the RTDP-Bel approach for POMDPs.
We present experimental results on several examples and show that our approach yields promising results.},
keywords = {formal methods for robot planning and control,planning and coordination methods for multiple robots}
}
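
The abstract builds on RTDP-Bel, the real-time dynamic programming method over discretized beliefs for single-agent Goal-POMDPs. Below is a minimal, illustrative Python sketch of that single-agent building block only, not the paper's Goal-DEC-POMDP algorithm: the toy 4-cell corridor model, all parameter values, and all function names are assumptions made purely for illustration, with the goal cell absorbing at zero cost and every other step incurring positive cost.

```python
# Minimal, illustrative RTDP-Bel-style sketch for a single-agent Goal-POMDP.
# NOT the paper's Goal-DEC-POMDP solver; it only shows the single-agent
# building block the abstract refers to: real-time dynamic programming over
# discretized beliefs, with an absorbing goal state that incurs zero cost.
# The toy model and all parameter values are assumptions made for illustration.

import random
from collections import defaultdict

GOAL = 3                  # cell 3 is the absorbing, zero-cost goal
ACTIONS = ["left", "right"]
STEP_COST = 1.0           # uniform positive cost outside the goal
SLIP = 0.1                # probability a move fails and the agent stays put
OBS_NOISE = 0.15          # probability of observing a neighbouring cell
D = 10                    # belief discretization: probabilities rounded to 1/D

def transition(s, a):
    """Return {next_state: probability}; the goal cell is absorbing."""
    if s == GOAL:
        return {GOAL: 1.0}
    target = max(0, s - 1) if a == "left" else min(3, s + 1)
    return {s: 1.0} if target == s else {s: SLIP, target: 1.0 - SLIP}

def observation(s_next):
    """Return {obs: probability}: mostly the true cell, sometimes a neighbour."""
    probs = defaultdict(float)
    probs[s_next] += 1.0 - OBS_NOISE
    neighbours = [c for c in (s_next - 1, s_next + 1) if 0 <= c <= 3]
    for c in neighbours:
        probs[c] += OBS_NOISE / len(neighbours)
    return probs

def belief_update(b, a, o):
    """Bayes filter: b'(s') is proportional to O(o|s') * sum_s T(s'|s,a) b(s)."""
    nb = defaultdict(float)
    for s, p in b.items():
        for s2, pt in transition(s, a).items():
            nb[s2] += p * pt
    for s2 in list(nb):
        nb[s2] *= observation(s2).get(o, 0.0)
    z = sum(nb.values())
    return {s: p / z for s, p in nb.items() if p > 0} if z > 0 else None

def discretize(b):
    """Hashable key for the value table: belief probabilities rounded to 1/D."""
    return tuple(sorted((s, round(p * D)) for s, p in b.items() if round(p * D) > 0))

def q_value(V, b, a):
    """Expected cost of a in b: immediate step cost plus successor-belief values."""
    q = STEP_COST * sum(p for s, p in b.items() if s != GOAL)
    obs_prob = defaultdict(float)              # P(o | b, a)
    for s, p in b.items():
        for s2, pt in transition(s, a).items():
            for o, po in observation(s2).items():
                obs_prob[o] += p * pt * po
    for o, po in obs_prob.items():
        nb = belief_update(b, a, o)
        if nb is not None:
            q += po * V[discretize(nb)]        # unseen beliefs default to 0
    return q

def rtdp_bel(trials=2000, max_steps=60, seed=0):
    """RTDP-Bel-style trials: greedy action, Bellman backup, sampled successor."""
    V = defaultdict(float)                     # zero heuristic (costs are non-negative)
    rng = random.Random(seed)
    for _ in range(trials):
        b, s = {0: 1.0}, 0                     # known start in cell 0; s is the hidden state
        for _ in range(max_steps):
            if b.get(GOAL, 0.0) > 0.99:        # belief is (almost) certain the goal is reached
                break
            qs = {a: q_value(V, b, a) for a in ACTIONS}
            a = min(qs, key=qs.get)
            V[discretize(b)] = qs[a]           # Bellman backup at the visited belief
            ts = transition(s, a)              # simulate one step of the hidden dynamics
            s = rng.choices(list(ts), weights=list(ts.values()))[0]
            obs = observation(s)
            o = rng.choices(list(obs), weights=list(obs.values()))[0]
            b = belief_update(b, a, o) or b
    return V

if __name__ == "__main__":
    V = rtdp_bel()
    print("estimated expected cost-to-goal from the initial belief:",
          round(V[discretize({0: 1.0})], 2))
```

In this style of method, the hash table over discretized beliefs is what keeps the undiscounted, indefinite-horizon objective manageable: values are stored only for beliefs actually visited during trials, and because the goal is absorbing with zero cost, the zero-initialized table is a valid lower bound on the expected cost to reach the target set.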
{"_id":"kvfLSHBBbsBs8XtMu","bibbaseid":"chatterjee-chmelk-indefinitehorizonreachabilityingoaldecpomdps","downloads":0,"creationDate":"2016-06-09T01:21:35.136Z","title":"Indefinite-Horizon Reachability in Goal-DEC-POMDPs","author_short":["Chatterjee, K.","Chmelík, M."],"year":null,"bibtype":"inproceedings","biburl":"icaps16.icaps-conference.org/papers.bib","bibdata":{"bibtype":"inproceedings","type":"inproceedings","track":"Robotics Track","title":"Indefinite-Horizon Reachability in Goal-DEC-POMDPs","url":"http://www.aaai.org/ocs/index.php/ICAPS/ICAPS16/paper/view/12999","author":[{"firstnames":["Krishnendu"],"propositions":[],"lastnames":["Chatterjee"],"suffixes":[]},{"firstnames":["Martin"],"propositions":[],"lastnames":["Chmelík"],"suffixes":[]}],"abstract":"DEC-POMDPs extend POMDPs to a multi-agent setting, where several agents operate in an uncertain environment independently to achieve a joint objective. DEC-POMDPs have been studied with finite-horizon and infinite-horizon discounted-sum objectives, and there exist solvers both for exact and approximate solutions. In this work we consider Goal-DEC-POMDPs, where given a set of target states, the objective is to ensure that the target set is reached with minimal cost. We consider the indefinite-horizon (infinite-horizon with either discounted-sum, or undiscounted-sum, where absorbing goal states have zero-cost) problem. We present a new and novel method to solve the problem that extends methods for finite-horizon DEC-POMDPs and the RTDP-Bel approach for POMDPs. We present experimental results on several examples, and show that our approach presents promising results.","keywords":"formal methods for robot planning and control,planning and coordination methods for multiple robots","bibtex":"@inproceedings {icaps16-33,\r\n track = {Robotics Track},\r\n title = {Indefinite-Horizon Reachability in Goal-DEC-POMDPs},\r\n url = {http://www.aaai.org/ocs/index.php/ICAPS/ICAPS16/paper/view/12999},\r\n author = {Krishnendu Chatterjee and Martin Chmelík},\r\n abstract = {DEC-POMDPs extend POMDPs to a multi-agent setting, where several agents operate in an uncertain environment independently to achieve a joint objective. DEC-POMDPs have been studied with finite-horizon and infinite-horizon discounted-sum objectives, and there exist solvers both for exact and approximate solutions. \r\nIn this work we consider Goal-DEC-POMDPs, where given a set of target states, the objective is to ensure that the target set is reached with minimal cost.\r\nWe consider the indefinite-horizon (infinite-horizon with either discounted-sum, or undiscounted-sum, where absorbing goal states have zero-cost) problem. \r\nWe present a new and novel method to solve the problem that extends methods for finite-horizon DEC-POMDPs and the RTDP-Bel approach for POMDPs. 
\r\nWe present experimental results on several examples, and show that our approach presents promising results.},\r\n keywords = {formal methods for robot planning and control,planning and coordination methods for multiple robots}\r\n}\r\n\r\n","author_short":["Chatterjee, K.","Chmelík, M."],"key":"icaps16-33","id":"icaps16-33","bibbaseid":"chatterjee-chmelk-indefinitehorizonreachabilityingoaldecpomdps","role":"author","urls":{"Paper":"http://www.aaai.org/ocs/index.php/ICAPS/ICAPS16/paper/view/12999"},"keyword":["formal methods for robot planning and control","planning and coordination methods for multiple robots"],"metadata":{"authorlinks":{}},"downloads":0,"html":""},"search_terms":["indefinite","horizon","reachability","goal","dec","pomdps","chatterjee","chmelík"],"keywords":["formal methods for robot planning and control","planning and coordination methods for multiple robots"],"authorIDs":[],"dataSources":["iMkx859KiXcegwsin","EZtZjCTnxcdTTyeij"]}