{"_id":{"_str":"5298a1a19eb585cc2600092e"},"__v":0,"authorIDs":[],"author_short":["Tseng, P.","Cameron, I.<nbsp>G.<nbsp>M.","Munoz, D.<nbsp>P.","Itti, L."],"bibbaseid":"tseng-cameron-munoz-itti-screeningattentionalrelateddiseasesbasedoncorrelationbetweensalienceandgaze-2009","bibdata":{"html":"<div class=\"bibbase_paper\"> \n\n\n<span class=\"bibbase_paper_titleauthoryear\">\n\t<span class=\"bibbase_paper_title\"><a name=\"Tseng_etal09vss\"> </a>Screening Attentional-related Diseases based on Correlation between Salience and Gaze.</span>\n\t<span class=\"bibbase_paper_author\">\nTseng, P.; Cameron, I. G.<nbsp>M.; Munoz, D. P.; and Itti, L.</span>\n\t<!-- <span class=\"bibbase_paper_year\">2009</span>. -->\n</span>\n\n\n\nIn\n<i>Proc. Vision Science Society Annual Meeting (VSS09)</i>, May 2009.\n\n\n\n\n\n<br class=\"bibbase_paper_content\"/>\n\n<span class=\"bibbase_paper_content\">\n \n \n \n <a href=\"javascript:showBib('Tseng_etal09vss')\"\n class=\"bibbase link\">\n <!-- <img src=\"http://www.bibbase.org/img/filetypes/bib.png\" -->\n\t<!-- alt=\"Screening Attentional-related Diseases based on Correlation between Salience and Gaze [bib]\" -->\n\t<!-- class=\"bibbase_icon\" -->\n\t<!-- style=\"width: 24px; height: 24px; border: 0px; vertical-align: text-top\"><span class=\"bibbase_icon_text\">Bibtex</span> -->\n BibTeX\n <i class=\"fa fa-caret-down\"></i></a>\n \n \n \n <a class=\"bibbase_abstract_link bibbase link\"\n href=\"javascript:showAbstract('Tseng_etal09vss')\">\n Abstract\n <i class=\"fa fa-caret-down\"></i></a>\n \n \n \n\n \n \n \n</span>\n\n<div class=\"well well-small bibbase\" id=\"bib_Tseng_etal09vss\"\n style=\"display:none\">\n <pre>@inproceedings{ Tseng_etal09vss,\n author = {P. Tseng and I. G. M. Cameron and D. P. Munoz and L. Itti},\n title = {Screening Attentional-related Diseases based on Correlation\n between Salience and Gaze},\n abstract = {Several studies have shown that eye movements and certain\n complex visual functions are influenced by diseases\n such as Parkinson's Disease (PD) and Attention\n Deficit Hyperactivity Disorder (ADHD). Here we\n examine how bottom-up (stimulus-driven) attentional\n selection mechanisms may differ between patient and\n control populations, and we take advantage of the\n difference to develop classifiers to differentiate\n patients from controls. We tracked gaze of four\n groups of observers (15 control children, aged 7-14;\n 6 ADHD children, aged 9-15; 12 control elderly, aged\n 66-79; and 9 PD elderly, aged 53-68) while they\n freely viewed MTV-style videos. These stimuli are\n composed of short (2-4 seconds), unrelated clips of\n natural scenes to reduce top-down (contextual)\n expectations and emphasize bottom-up influences on\n gaze allocations at the scene change. We used a\n saliency model to compute bottom-up saliency maps\n for every video frame. Saliency maps can be computed\n from a full set of features (color, intensity,\n orientation, flicker, motion) or from individual\n features. Support-vector-machine classifiers (with\n Radial-Basis Function Kernel) were built for each\n feature contributing the saliency map and for the\n combination of them. Leave-one-out was used to train\n and test the classifiers. Two classification\n experiments were performed: (1) between ADHD and\n control children; (2) between PD and control\n elderly. Saliency maps computed with all features\n can well differentiate patients and control\n populations (correctness: experiment 1 - 100%;\n experiment 2 - 95.24%). 
Additionally, saliency maps\n computed from any one feature performed nearly as\n well (both experiments' results are 0-5%\n worse). Moreover, 0-250 ms after scene change is the\n most discriminative period for the\n classification. This study demonstrates that the\n bottom-up mechanism is greatly influenced by PD and\n ADHD, and the difference can serve as a probable\n diagnosis tool for clinical applications. },\n booktitle = {Proc. Vision Science Society Annual Meeting (VSS09)},\n year = {2009},\n month = {May},\n type = {bu;td;mod;psy;med},\n review = {abs/conf}\n}</pre>\n</div>\n\n\n<div class=\"well well-small bibbase\" id=\"abstract_Tseng_etal09vss\"\n style=\"display:none\">\n Several studies have shown that eye movements and certain complex visual functions are influenced by diseases such as Parkinson's Disease (PD) and Attention Deficit Hyperactivity Disorder (ADHD). Here we examine how bottom-up (stimulus-driven) attentional selection mechanisms may differ between patient and control populations, and we take advantage of the difference to develop classifiers to differentiate patients from controls. We tracked gaze of four groups of observers (15 control children, aged 7-14; 6 ADHD children, aged 9-15; 12 control elderly, aged 66-79; and 9 PD elderly, aged 53-68) while they freely viewed MTV-style videos. These stimuli are composed of short (2-4 seconds), unrelated clips of natural scenes to reduce top-down (contextual) expectations and emphasize bottom-up influences on gaze allocations at the scene change. We used a saliency model to compute bottom-up saliency maps for every video frame. Saliency maps can be computed from a full set of features (color, intensity, orientation, flicker, motion) or from individual features. Support-vector-machine classifiers (with Radial-Basis Function Kernel) were built for each feature contributing the saliency map and for the combination of them. Leave-one-out was used to train and test the classifiers. Two classification experiments were performed: (1) between ADHD and control children; (2) between PD and control elderly. Saliency maps computed with all features can well differentiate patients and control populations (correctness: experiment 1 - 100%; experiment 2 - 95.24%). Additionally, saliency maps computed from any one feature performed nearly as well (both experiments' results are 0-5% worse). Moreover, 0-250 ms after scene change is the most discriminative period for the classification. This study demonstrates that the bottom-up mechanism is greatly influenced by PD and ADHD, and the difference can serve as a probable diagnosis tool for clinical applications.\n</div>\n\n\n</div>\n","downloads":0,"bibbaseid":"tseng-cameron-munoz-itti-screeningattentionalrelateddiseasesbasedoncorrelationbetweensalienceandgaze-2009","role":"author","year":"2009","type":"bu;td;mod;psy;med","title":"Screening Attentional-related Diseases based on Correlation between Salience and Gaze","review":"abs/conf","month":"May","key":"Tseng_etal09vss","id":"Tseng_etal09vss","booktitle":"Proc. Vision Science Society Annual Meeting (VSS09)","bibtype":"inproceedings","bibtex":"@inproceedings{ Tseng_etal09vss,\n author = {P. Tseng and I. G. M. Cameron and D. P. Munoz and L. 
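The abstract describes the classification pipeline (per-feature saliency maps, RBF-kernel support-vector machines, leave-one-out cross-validation) but not its implementation. Below is a minimal sketch of that pipeline, assuming scikit-learn and synthetic placeholder feature vectors; the array shapes, the choice to summarize each observer by salience sampled at gaze positions in the 0-250 ms window after scene changes, and all variable names are illustrative assumptions, not the authors' code.

# Minimal sketch (not the authors' implementation) of the classification step.
# Each observer is assumed to be summarized by one feature vector, e.g. mean
# normalized salience at fixated locations for each saliency channel (color,
# intensity, orientation, flicker, motion) in the 0-250 ms window after scene
# changes. Feature extraction is not shown; the arrays below are placeholders.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical data: one row per observer, one column per saliency channel;
# labels are 1 for patients and 0 for controls (here, 6 ADHD vs. 15 controls).
rng = np.random.default_rng(0)
X = rng.normal(size=(21, 5))
y = np.array([1] * 6 + [0] * 15)

# RBF-kernel support-vector classifier, evaluated with leave-one-out
# cross-validation as described in the abstract.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.2%}")

With leave-one-out cross-validation each fold scores 0 or 1, so the mean over folds is the overall accuracy; single-feature classifiers, as compared in the abstract, can be evaluated the same way by restricting X to one column (e.g., X[:, [i]]).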