Integrating low-level and high-level visual influences on eye movement behavior. Peters, R. J. & Itti, L. In Proc. Vision Science Society Annual Meeting (VSS07), May 2007.

Abstract: We propose a comprehensive computational framework unifying previous qualitative studies of high-level cognitive influences on eye movements with quantitative studies demonstrating the influence of low-level factors such as saliency. In this framework, a top-level "governor" uses high-level task information to determine how best to combine low-level saliency and gist-based task-relevance maps into a single eye-movement priority map. We recorded the eye movements of six trained subjects playing 18 different sessions of first-person perspective video games (car racing, flight combat, and "first-person shooter") and simultaneously recorded the game's video frames, giving about 18 hours of recording for 15,000,000 eye movement samples (240Hz) and 1.1TB of video data (640x480 pixels at 30Hz). We then computed measures of how well the individual saliency and task-relevance maps predicted observers' eye positions in each frame, and probed for the role of the governor in relationships between high-level task information -- such as altimeter and damage meter settings, or the presence/absence of a target -- and the predictive strength of the maps. One such relationship occurred in the flight combat game. In this game, observers are actively task-driven while tracking enemy planes, ignoring bottom-up saliency in favor of task-relevant items like the radar screen; then, after firing a missile, observers become passively stimulus-driven while awaiting visual confirmation of the missile hit. We confirmed this quantitatively by analyzing the correspondence between saliency and eye position across a window of +/-10s relative to the time of 328 such missile hits. Around -200ms (before the hit), the saliency correspondence begins to rise, reaching a peak at +100ms (after the hit) of 10-fold above the previous baseline, then is suppressed below baseline at +800ms, and rebounds back to baseline at +2000ms. Thus, one mechanism by which high-level cognitive information can influence eye movements is through dynamically weighting competing saliency and task-relevance maps.
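The "governor" described above amounts to a task-dependent weighting of two spatial maps. Below is a minimal sketch of that combination step, assuming both maps are non-negative 2-D arrays on the same pixel grid and that the governor's output can be summarized by a single scalar weight; the function name and the convex-combination scheme are illustrative choices, not the authors' actual implementation.

```python
import numpy as np

def priority_map(saliency, relevance, w_task):
    """Combine a bottom-up saliency map and a top-down task-relevance map
    into a single eye-movement priority map.

    saliency, relevance : non-negative 2-D arrays on the same grid; each is
                          normalized to unit sum so the weights are comparable.
    w_task              : scalar in [0, 1] set by the "governor" from
                          high-level task state (hypothetical parameter);
                          1.0 is fully task-driven, 0.0 fully stimulus-driven.
    """
    s = saliency / saliency.sum()
    t = relevance / relevance.sum()
    return (1.0 - w_task) * s + w_task * t
```

In the flight-combat scenario from the abstract, such a governor would hold w_task near 1 while the observer tracks enemy planes (gaze drawn to task-relevant items like the radar screen), then drop it toward 0 after a missile is fired, letting bottom-up saliency dominate while the observer awaits visual confirmation of the hit.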
@inproceedings{Peters_Itti07vss,
author = {R. J. Peters and L. Itti},
title = {Integrating low-level and high-level visual influences on eye
movement behavior},
abstract = {We propose a comprehensive computational framework unifying
previous qualitative studies of high-level cognitive
influences on eye movements with quantitative
studies demonstrating the influence of low-level
factors such as saliency. In this framework, a
top-level "governor" uses high-level task
information to determine how best to combine
low-level saliency and gist-based task-relevance
maps into a single eye-movement priority map. We
recorded the eye movements of six trained subjects
playing 18 different sessions of first-person
perspective video games (car racing, flight combat,
and "first-person shooter") and simultaneously
recorded the game's video frames, giving about 18
hours of recording for 15,000,000 eye movement
samples (240Hz) and 1.1TB of video data (640x480
pixels at 30Hz). We then computed measures of how
well the individual saliency and task-relevance maps
predicted observers' eye positions in each frame,
and probed for the role of the governor in
relationships between high-level task information --
such as altimeter and damage meter settings, or the
presence/absence of a target -- and the predictive
strength of the maps. One such relationship
occurred in the flight combat game. In this game,
observers are actively task-driven while tracking
enemy planes, ignoring bottom-up saliency in favor
of task-relevant items like the radar screen; then,
after firing a missile, observers become passively
stimulus-driven while awaiting visual confirmation
of the missile hit. We confirmed this quantitatively
by analyzing the correspondence between saliency and
eye position across a window of +/-10s relative to
the time of 328 such missile hits. Around -200ms
(before the hit), the saliency correspondence begins
to rise, reaching a peak at +100ms (after the hit)
of 10-fold above the previous baseline, then is
suppressed below baseline at +800ms, and rebounds
back to baseline at +2000ms. Thus, one mechanism by
which high-level cognitive information can influence
eye movements is through dynamically weighting
competing saliency and task-relevance maps.},
booktitle = {Proc. Vision Science Society Annual Meeting (VSS07)},
year = {2007},
month = {May},
type = {mod;bu;td;eye},
review = {abs/conf}
}
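The abstract reports two analysis steps: a per-frame measure of how well each map predicted eye position, and an event-locked average of that measure in a +/-10s window around the 328 missile hits. Below is a minimal sketch of both steps, assuming the correspondence signal is sampled at the 30Hz video frame rate; the NSS-style z-score metric and all function names here are illustrative assumptions, since the abstract does not specify the metric used.

```python
import numpy as np

def correspondence_at_gaze(saliency_map, gaze_x, gaze_y):
    """One plausible per-frame correspondence measure: the z-scored
    (Normalized Scanpath Saliency-style) value of the saliency map at the
    current gaze position. Higher values mean gaze fell on locations more
    salient than the frame average. (Assumed metric, not confirmed by the
    abstract.)"""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    return z[gaze_y, gaze_x]  # row index = y, column index = x

def event_locked_average(signal, event_frames, fps=30.0, window_s=10.0):
    """Average a per-frame correspondence signal in a +/-window_s window
    around each event (here, missile hits). Events whose window falls off
    either end of the recording are skipped."""
    half = int(round(window_s * fps))
    segments = [signal[f - half : f + half + 1]
                for f in event_frames
                if f - half >= 0 and f + half < len(signal)]
    times = np.arange(-half, half + 1) / fps  # seconds relative to the event
    return times, np.asarray(segments).mean(axis=0)
```

Dividing the averaged trace by its mean over the pre-event portion of the window expresses it as a fold change over baseline, which is how a result like the reported 10-fold peak at +100ms would be read off.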