PARTNR: Pick and place Ambiguity Resolving by Trustworthy iNteractive leaRning. Luijkx, J., Ajanovic, Z., Ferranti, L., & Kober, J. In NeurIPS Workshop on Robot Learning, November 2022.
Paper: https://arxiv.org/pdf/2211.08304.pdf
Video: https://www.youtube.com/watch?v=q8S2Ua41Lik
Website: https://partnr-learn.github.io/
Several recent works show impressive results in mapping language-based human commands and image scene observations directly to robot-executable policies (e.g., pick and place poses). However, these approaches do not consider the uncertainty of the trained policy and always execute the action the current policy deems most probable. This makes them vulnerable to domain shift and inefficient in the number of demonstrations they require. We extend previous works and present the PARTNR algorithm, which detects ambiguities in the trained policy by analyzing multiple modalities in the pick and place poses using topological analysis. PARTNR employs an adaptive, sensitivity-based gating function that decides whether additional user demonstrations are required. User demonstrations are aggregated into the dataset and used for subsequent training. In this way, the policy can adapt promptly to domain shift and minimize the number of demonstrations required for a well-trained policy. The adaptive threshold makes it possible to reach a user-acceptable level of ambiguity for autonomous policy execution, which in turn increases the trustworthiness of our system. We demonstrate the performance of PARTNR in a table-top pick-and-place task.
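The gating idea in the abstract (execute autonomously only when the policy's pose predictions are unambiguous, otherwise query the user) can be sketched as follows. This is a minimal illustration, not the paper's method: `count_modes` is a hypothetical greedy clustering stand-in for the topological analysis, and `partnr_step` with its `threshold` parameter is an assumed, simplified form of the adaptive gating function.

```python
import numpy as np

def count_modes(pose_samples, radius=0.05):
    """Crude mode count via greedy clustering of sampled pick/place
    poses; a stand-in for the paper's topological analysis."""
    modes = []
    for p in pose_samples:
        # A sample starts a new mode if it is far from all existing modes.
        if all(np.linalg.norm(p - m) > radius for m in modes):
            modes.append(p)
    return len(modes)

def partnr_step(pose_samples, threshold=1):
    """Gating sketch: execute autonomously if the predicted pose
    distribution is unimodal enough, otherwise request a demonstration
    to aggregate into the dataset for retraining."""
    if count_modes(np.asarray(pose_samples)) > threshold:
        return "request_demonstration"
    return "execute"
```

In this toy version, tightly clustered predictions (one mode) pass the gate, while predictions split between two distant targets exceed the threshold and trigger a demonstration request; in the paper the threshold itself adapts based on sensitivity analysis.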
@inproceedings{luijkx_j_partnr_2022,
	title = {{PARTNR}: {Pick} and place {Ambiguity} {Resolving} by {Trustworthy} {iNteractive} {leaRning}},
	url_paper = {https://arxiv.org/pdf/2211.08304.pdf},
	url_video = {https://www.youtube.com/watch?v=q8S2Ua41Lik},
	url_website = {https://partnr-learn.github.io/},
	abstract = {Several recent works show impressive results in mapping language-based human
commands and image scene observations directly to robot-executable policies (e.g.,
pick and place poses). However, these approaches do not consider the uncertainty
of the trained policy and always execute the action the current policy deems most
probable. This makes them vulnerable to domain shift and inefficient in the number
of demonstrations they require. We extend previous works and present the PARTNR
algorithm, which detects ambiguities in the trained policy by analyzing multiple
modalities in the pick and place poses using topological analysis. PARTNR employs
an adaptive, sensitivity-based gating function that decides whether additional user
demonstrations are required. User demonstrations are aggregated into the dataset
and used for subsequent training. In this way, the policy can adapt promptly to
domain shift and minimize the number of demonstrations required for a well-trained
policy. The adaptive threshold makes it possible to reach a user-acceptable level of
ambiguity for autonomous policy execution, which in turn increases the trustworthiness
of our system. We demonstrate the performance of PARTNR in a table-top
pick-and-place task.},
	booktitle = {{NeurIPS} {Workshop} on {Robot} {Learning}},
	author = {Luijkx, J. and Ajanovic, Z. and Ferranti, L. and Kober, J.},
	month = nov,
	year = {2022},
	keywords = {key\_manipulator, key\_robot\_learning},
}
