Robotic Weight-based Object Relocation in Clutter via Tree-based Q-learning Approach using Breadth and Depth Search Techniques. Golluccio, G., Di Vito, D., Marino, A., & Antonelli, G. 2021. In 2021 20th International Conference on Advanced Robotics (ICAR), pp. 676–681. IEEE.
In this paper, the problem of retrieving a target object from a cluttered environment through a mobile manipulator is considered. The task is solved by combining Task and Motion Planning: at a higher level, the task planner plans the sequence of objects to relocate, while, at a lower level, the motion planner plans the robot movements taking into account robot and environment constraints. In particular, the latter provides feedback to the former about the feasibility of object sequences; this information is exploited to train a Reinforcement Learning agent that, according to an object-weight-based metric, builds a dynamic decision tree where each node represents a sequence of relocated objects and edge values are weights updated via a Q-learning-inspired algorithm. Three learning strategies, differing in how the tree is explored, are analysed. Moreover, each exploration approach is performed using two different tree-search methods: the breadth-first and depth-first techniques. Finally, the proposed learning strategies are numerically validated and compared in three scenarios of growing complexity.
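The tree-building idea the abstract describes can be illustrated with a minimal, hypothetical Python sketch: nodes are relocation sequences, each edge carries a Q-value updated with a Q-learning-style rule, and the tree is expanded either breadth-first or depth-first. `OBJECT_WEIGHTS`, `feasible`, and `reward` are stand-ins invented here for illustration (in the paper, feasibility feedback comes from the motion planner); this is not the authors' implementation.

```python
from collections import deque

# Illustrative object set; in the paper the reward metric is based on
# object weights, so lighter objects are cheaper to relocate.
OBJECT_WEIGHTS = {"box": 2.0, "can": 0.5, "bottle": 1.0}
ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount (assumed values)

def feasible(seq):
    # Placeholder for the motion planner's feasibility feedback.
    return True

def reward(obj):
    # Hypothetical weight-based metric: heavier objects cost more.
    return -OBJECT_WEIGHTS[obj]

def expand(node):
    # Children of a node: relocate any object not yet in the sequence.
    return [node + (o,) for o in OBJECT_WEIGHTS if o not in node]

def q_learn_tree(depth_first=False, episodes=1):
    """Build the decision tree, updating edge Q-values as it is explored."""
    Q = {}  # (node, next_object) -> edge weight
    for _ in range(episodes):
        frontier = deque([()])  # root: nothing relocated yet
        while frontier:
            # pop() gives depth-first order, popleft() breadth-first.
            node = frontier.pop() if depth_first else frontier.popleft()
            for child in (c for c in expand(node) if feasible(c)):
                obj = child[-1]
                # Q-learning-style update toward reward + best future edge.
                best_next = max(
                    (Q.get((child, g[-1]), 0.0) for g in expand(child)),
                    default=0.0,
                )
                old = Q.get((node, obj), 0.0)
                Q[(node, obj)] = old + ALPHA * (reward(obj) + GAMMA * best_next - old)
                frontier.append(child)
    return Q
```

With three objects the full tree has 15 edges, and both search orders visit the same edge set; only the update order (and hence how quickly values propagate over repeated episodes) differs.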
@conference{
	11580_92258,
	author = {Golluccio, Giacomo and Di Vito, Daniele and Marino, Alessandro and Antonelli, Gianluca},
	title = {Robotic Weight-based Object Relocation in Clutter via Tree-based Q-learning Approach using Breadth and Depth Search Techniques},
	year = {2021},
	publisher = {IEEE},
	booktitle = {2021 20th International Conference on Advanced Robotics (ICAR)},
	abstract = {In this paper, the problem of retrieving a target object from a cluttered environment through a mobile manipulator is considered. The task is solved by combining Task and Motion Planning: at a higher level, the task planner plans the sequence of objects to relocate, while, at a lower level, the motion planner plans the robot movements taking into account robot and environment constraints. In particular, the latter provides feedback to the former about the feasibility of object sequences; this information is exploited to train a Reinforcement Learning agent that, according to an object-weight-based metric, builds a dynamic decision tree where each node represents a sequence of relocated objects and edge values are weights updated via a Q-learning-inspired algorithm. Three learning strategies, differing in how the tree is explored, are analysed. Moreover, each exploration approach is performed using two different tree-search methods: the breadth-first and depth-first techniques. Finally, the proposed learning strategies are numerically validated and compared in three scenarios of growing complexity.},
	keywords = {Measurement; Q-learning; Heuristic algorithms; Search problems; Manipulators; Learning (Artificial Intelligence); Mobile robots; Path planning; Tree searching; Planning; Trajectory},
	url = {https://ieeexplore.ieee.org/abstract/document/9659471},
	doi = {10.1109/ICAR53236.2021.9659471},
	isbn = {978-1-6654-3684-7},	
	pages = {676--681}
}
