From perception to action and vice versa: A new architecture showing how perception and action can modulate each other simultaneously. Palomino, A. J., Garcia-Olaya, A., Fernandez, F., & Bandera, J. P. In 2013 European Conference on Mobile Robots, ECMR 2013, pages 268–273, 2013.
Artificial vision systems cannot process all the information they receive from the world in real time, because doing so is highly expensive and inefficient in terms of computational cost. However, inspired by biological perception systems, it is possible to develop an artificial attention model able to select only the relevant part of the scene, as human vision does. From the Automated Planning point of view, a relevant area can be seen as one where the objects involved in the execution of a plan are located. Thus, the planning system should guide the attention model to track relevant objects. At the same time, the perceived objects may constrain or provide new information that suggests modifying the current plan. Therefore, a plan that is being executed should be adapted or recomputed taking into account the actual information perceived from the world. In this work, we introduce an architecture that creates a symbiosis between the planning and attention modules of a robotic system, linking visual features with high-level behaviours. The architecture is based on the interaction between an oversubscription planner, which produces plans constrained by the information perceived from the vision system, and an object-based attention system able to focus on the relevant objects of the plan being executed. © 2013 IEEE.
@inproceedings{Palomino2013,
abstract = {Artificial vision systems cannot process all the information they receive from the world in real time, because doing so is highly expensive and inefficient in terms of computational cost. However, inspired by biological perception systems, it is possible to develop an artificial attention model able to select only the relevant part of the scene, as human vision does. From the Automated Planning point of view, a relevant area can be seen as one where the objects involved in the execution of a plan are located. Thus, the planning system should guide the attention model to track relevant objects. At the same time, the perceived objects may constrain or provide new information that suggests modifying the current plan. Therefore, a plan that is being executed should be adapted or recomputed taking into account the actual information perceived from the world. In this work, we introduce an architecture that creates a symbiosis between the planning and attention modules of a robotic system, linking visual features with high-level behaviours. The architecture is based on the interaction between an oversubscription planner, which produces plans constrained by the information perceived from the vision system, and an object-based attention system able to focus on the relevant objects of the plan being executed. {\textcopyright} 2013 IEEE.},
author = {Palomino, Antonio Jesus and Garcia-Olaya, Angel and Fernandez, Fernando and Bandera, Juan Pedro},
booktitle = {2013 European Conference on Mobile Robots, ECMR 2013},
doi = {10.1109/ECMR.2013.6698853},
isbn = {978-1-4799-0263-7},
pages = {268--273},
title = {{From perception to action and vice versa: A new architecture showing how perception and action can modulate each other simultaneously}},
url = {http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2{\&}SrcAuth=ORCID{\&}SrcApp=OrcidOrg{\&}DestLinkType=FullRecord{\&}DestApp=WOS{\_}CPL{\&}KeyUT=WOS:000330234600043{\&}KeyUID=WOS:000330234600043},
year = {2013}
}