Probabilistic Learning of Task-Specific Visual Attention. Borji, A., Sihite, D. N., & Itti, L. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, pages 1-8, Jun 2012. Abstract: Despite a considerable amount of previous work on bottom-up saliency modeling for predicting human fixations over static and dynamic stimuli, few studies have thus far attempted to model top-down and task-driven influences of visual attention. Here, taking advantage of the sequential nature of real-world tasks, we propose a unified Bayesian approach for modeling task-driven visual attention. Several sources of information, including global context of a scene, previous attended locations, and previous motor actions, are integrated over time to predict the next attended location. Recording eye movements while subjects engage in 5 contemporary 2D and 3D video games, as modest counterparts of everyday tasks, we show that our approach is able to predict human attention and gaze better than the state-of-the-art, with a large margin (about 15 percent increase in prediction accuracy). The advantage of our approach is that it is automatic and applicable to arbitrary visual tasks.
@inproceedings{Borji_etal12cvpr,
author = {A. Borji and D. N. Sihite and L. Itti},
title = {Probabilistic Learning of Task-Specific Visual Attention},
booktitle = {Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island},
abstract = {Despite a considerable amount of previous work on bottom-up saliency modeling for predicting human fixations
over static and dynamic stimuli, few studies have thus far attempted to model top-down and task-driven
influences of visual attention. Here, taking advantage of the sequential nature of real-world tasks,
we propose a unified Bayesian approach for modeling task-driven visual attention. Several sources of
information, including global context of a scene, previous attended locations, and previous motor
actions, are integrated over time to predict the next attended location. Recording eye movements while
subjects engage in 5 contemporary 2D and 3D video games, as modest counterparts of everyday tasks, we
show that our approach is able to predict human attention and gaze better than the state-of-the-art,
with a large margin (about 15 percent increase in prediction accuracy). The advantage of our approach
is that it is automatic and applicable to arbitrary visual tasks.},
month = {Jun},
pages = {1--8},
year = {2012},
review = {full/conf},
type = {bu;td;mod;cv},
if = {2012 acceptance rate: 26.2%},
file = {http://ilab.usc.edu/publications/doc/Borji_etal12cvpr.pdf}
}
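The entry's abstract describes integrating several cue maps (scene gist, previous gaze, previous motor actions) into a posterior over the next attended location. A minimal sketch of that kind of Bayesian cue fusion is below; the function name, the naive-Bayes (independent-cue) combination, and the uniform-prior default are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def fuse_attention_maps(cue_maps, prior=None):
    """Combine per-cue likelihood maps into a normalized posterior map.

    cue_maps : list of 2-D arrays, each acting as P(cue | location)
    prior    : optional 2-D array P(location); uniform if None
    """
    posterior = np.ones_like(cue_maps[0]) if prior is None else prior.copy()
    for m in cue_maps:
        posterior = posterior * m       # naive-Bayes product over cues
    posterior /= posterior.sum()        # normalize to a distribution
    return posterior

# Toy example: two 4x4 cue maps standing in for gist and previous gaze
rng = np.random.default_rng(0)
gist = rng.random((4, 4))
prev_gaze = rng.random((4, 4))
post = fuse_attention_maps([gist, prev_gaze])
next_loc = np.unravel_index(post.argmax(), post.shape)  # predicted fixation
```

In the paper's setting the fusion is sequential: the posterior at one time step would feed back as (part of) the prior for the next, so prediction accumulates evidence over the course of the task.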