Modeling the Influence of Action on Spatial Attention in Visual Interactive Environments. Borji, A., Sihite, D. N., & Itti, L. In Proc. IEEE International Conference on Robotics and Automation (ICRA), pages 1-7, May 2012.
@inproceedings{Borji_etal12icra,
author = {A. Borji and D. N. Sihite and L. Itti},
title = {Modeling the Influence of Action on Spatial Attention in Visual Interactive Environments},
booktitle = {Proc. IEEE International Conference on Robotics and Automation (ICRA)},
abstract = {A large number of studies have reported on top-down
             influences on visual attention. However, less
             progress has been made in understanding and
             modeling its mechanisms in real-world tasks. In this
             paper, we propose an approach for learning spatial
             attention that takes into account the influence of
             physical actions on top-down attention. For this
             purpose, we focus on interactive visual environments
             (video games), which are modest real-world
             simulations where a player has to attend to certain
             aspects of visual stimuli and perform actions to
             achieve a goal. The basic idea is to learn a mapping
             from the current mental state of the game player,
             represented by past actions and observations, to
             their gaze fixation. We follow a data-driven
             approach in which we train a model on data from
             some players and test it on a new subject. In
             particular, this paper makes two contributions:
             1) employing multimodal information, including mean
             eye position, scene gist, physical actions,
             bottom-up saliency, and tagged events, for state
             representation, and 2) analyzing different methods
             of combining bottom-up and top-down influences.
             Compared with other top-down task-driven and
             bottom-up spatio-temporal models, our approach
             achieves higher NSS scores in predicting eye
             positions.},
year = {2012},
month = {May},
pages = {1--7},
review = {full/conf},
type = {mod;td;cv},
if = {2012 acceptance rate: 40%},
file = {http://ilab.usc.edu/publications/doc/Borji_etal12icra.pdf}
}