Maeda, G. J., Singh, S. P. N., & Durrant-Whyte, H. (2010). Feedback Motion Planning Approach for Nonlinear Control using Gain Scheduled RRTs. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), pp. 119–126. doi: 10.1109/IROS.2010.5650634

Abstract: A new control strategy based on feedback motion planning is presented for solving nonlinear control problems in constrained environments. The algorithm explores the state space using a bi-directional rapidly exploring random tree (biRRT) to find a feasible trajectory between an initial and a goal state. By incrementally scheduling LQR controllers, it attempts to connect states so as to link the two trees. These attempts are evaluated by verifying that the connected state lies inside the controllable region of an infinite-time-horizon controller at the goal, which allows equivalent neighborhoods in the state space to be delineated rapidly. As a result, random exploration terminates as soon as a feasible solution is reachable by feedback means, avoiding oversampling and partially introducing optimal actions in the neighborhood of the connection. The algorithm is demonstrated and compared against a plain biRRT on single-link pendulum and cart-pole swing-up tasks amongst obstacles, the latter showing nearly an order of magnitude more efficient search.
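The connection test described in the abstract (checking whether a candidate state falls inside the controllable neighborhood of the goal's infinite-horizon LQR controller) is commonly approximated with a sublevel set of the LQR cost-to-go. Below is a minimal sketch for a single-link pendulum linearized about its upright goal; the threshold `rho`, the cost weights, and the function names are illustrative assumptions, not the paper's actual method or code:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Single-link pendulum linearized about the upright equilibrium:
# state x = [theta - pi, theta_dot], input u = torque (illustrative model).
g, l, m = 9.81, 1.0, 1.0
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])          # unstable upright linearization
B = np.array([[0.0],
              [1.0 / (m * l**2)]])

Q = np.diag([10.0, 1.0])              # state cost (assumed weights)
R = np.array([[1.0]])                 # input cost

# Infinite-horizon LQR: solve the continuous-time algebraic Riccati equation.
S = solve_continuous_are(A, B, Q, R)  # cost-to-go is V(x) = x' S x
K = np.linalg.solve(R, B.T @ S)       # optimal gain, u = -K x

def in_goal_neighborhood(x, rho=1.0):
    """Accept a tree connection if the candidate state sits inside the
    rho-sublevel set of the LQR cost-to-go, used here as a proxy for the
    controllable region around the goal. rho is a hypothetical tuning knob."""
    return float(x @ S @ x) <= rho
```

In a biRRT of the kind the abstract describes, a test like `in_goal_neighborhood` would let random sampling stop as soon as a connected state can be driven to the goal by the feedback law `u = -K x`, rather than growing the trees further.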
@CONFERENCE{Maeda2010,
author = {Guilherme J. Maeda and Surya P. N. Singh and Hugh Durrant-Whyte},
title = {Feedback Motion Planning Approach for Nonlinear Control using Gain
Scheduled RRTs},
booktitle = {Proceedings of the International Conference on Intelligent Robots
and Systems ({IROS})},
year = {2010},
pages = {119--126},
abstract = {A new control strategy based on feedback motion planning is presented
for solving nonlinear control problems in constrained environments.
The algorithm explores the state-space using a bi-directional rapidly
exploring random tree (biRRT) in order to find a feasible trajectory
between an initial and goal state. By incrementally scheduling LQR
controllers, it attempts to connect states so as to link the two
trees. These attempts are evaluated by verifying that the connected
state is inside the controllable area of an infinite time horizon
controller at the goal. This allows for a rapid delineation of equivalent
neighborhoods in the state-space. As a result, random exploration
is terminated as soon as a feasible solution is made possible by
feedback means, avoiding oversampling and partially introducing optimal
actions at the neighborhood of the connection. The algorithm is demonstrated
and compared against a biRRT using single-link pendulum and cart-pole
swing-up tasks amongst obstacles, the latter showing nearly an order
of magnitude more efficient search.},
doi = {10.1109/IROS.2010.5650634},
pdf = {iros2010.feedbackmp.pdf}
}