{"_id":{"_str":"5298a1a09eb585cc2600083b"},"__v":0,"authorIDs":[],"author_short":["Bonaiuto, J.","Itti, L."],"bibbaseid":"bonaiuto-itti-theuseofattentionandspatialinformationforrapidfacialrecognitioninvideo-2006","bibdata":{"html":"<div class=\"bibbase_paper\"> \n\n\n<span class=\"bibbase_paper_titleauthoryear\">\n\t<span class=\"bibbase_paper_title\"><a name=\"Bonaiuto_Itti06ivc\"> </a>The Use of Attention and Spatial Information for Rapid Facial Recognition in Video.</span>\n\t<span class=\"bibbase_paper_author\">\nBonaiuto, J.; and Itti, L.</span>\n\t<!-- <span class=\"bibbase_paper_year\">2006</span>. -->\n</span>\n\n\n\n<i>Image and Vision Computing</i>,\n\n24(6):557-563.\n\nJun 2006.\n\n\n\n\n<br class=\"bibbase_paper_content\"/>\n\n<span class=\"bibbase_paper_content\">\n \n \n \n <a href=\"javascript:showBib('Bonaiuto_Itti06ivc')\"\n class=\"bibbase link\">\n <!-- <img src=\"http://www.bibbase.org/img/filetypes/bib.png\" -->\n\t<!-- alt=\"The Use of Attention and Spatial Information for Rapid Facial Recognition in Video [bib]\" -->\n\t<!-- class=\"bibbase_icon\" -->\n\t<!-- style=\"width: 24px; height: 24px; border: 0px; vertical-align: text-top\"><span class=\"bibbase_icon_text\">Bibtex</span> -->\n BibTeX\n <i class=\"fa fa-caret-down\"></i></a>\n \n \n \n <a class=\"bibbase_abstract_link bibbase link\"\n href=\"javascript:showAbstract('Bonaiuto_Itti06ivc')\">\n Abstract\n <i class=\"fa fa-caret-down\"></i></a>\n \n \n \n\n \n \n \n</span>\n\n<div class=\"well well-small bibbase\" id=\"bib_Bonaiuto_Itti06ivc\"\n style=\"display:none\">\n <pre>@article{ Bonaiuto_Itti06ivc,\n author = {J. Bonaiuto and L. Itti},\n title = {The Use of Attention and Spatial Information for Rapid Facial Recognition in Video},\n abstract = {Bottom-up visual attention is the process by which primates\nquickly select regions of an image likely to contain behaviorally\nrelevant objects. In artificial systems, restricting the task of\nobject recognition to these regions allows faster recognition and\nunsupervised learning of multiple objects in cluttered scenes. A\nproblem with this approach is that often objects that are\nsuperficially dissimilar to the target are given the same\nconsideration in recognition as similar objects. Additionally, in\nvideo, objects recognized in previous frames at locations distant to\nthe current fixation point often are given the same consideration in\nrecognition as objects previously recognized at proximal locations.\nHere we investigate the value of rapidly pruning the facial\nrecognition search space, first using similarity in the\nalready-computed low-level features that guide attention to prioritize\nmatching against an object database, and, second, using spatial\nproximity information derived from previous video frames. By\ncomparing the performance of Lowe's recognition algorithm with Itti \\&\nKoch's bottom-up attention model with and without search space\npruning, we demonstrate that this approach significantly accelerates\nfacial recognition in video footage.},\n journal = {Image and Vision Computing},\n volume = {24},\n number = {6},\n pages = {557-563},\n month = {Jun},\n year = {2006},\n file = {http://ilab.usc.edu/publications/doc/Bonaiuto_Itti06ivc.pdf},\n type = {bu ; cv},\n if = {2004 impact factor: 1.159}\n}</pre>\n</div>\n\n\n<div class=\"well well-small bibbase\" id=\"abstract_Bonaiuto_Itti06ivc\"\n style=\"display:none\">\n Bottom-up visual attention is the process by which primates quickly select regions of an image likely to contain behaviorally relevant objects. In artificial systems, restricting the task of object recognition to these regions allows faster recognition and unsupervised learning of multiple objects in cluttered scenes. A problem with this approach is that often objects that are superficially dissimilar to the target are given the same consideration in recognition as similar objects. Additionally, in video, objects recognized in previous frames at locations distant to the current fixation point often are given the same consideration in recognition as objects previously recognized at proximal locations. Here we investigate the value of rapidly pruning the facial recognition search space, first using similarity in the already-computed low-level features that guide attention to prioritize matching against an object database, and, second, using spatial proximity information derived from previous video frames. By comparing the performance of Lowe's recognition algorithm with Itti \\& Koch's bottom-up attention model with and without search space pruning, we demonstrate that this approach significantly accelerates facial recognition in video footage.\n</div>\n\n\n</div>\n","downloads":0,"bibbaseid":"bonaiuto-itti-theuseofattentionandspatialinformationforrapidfacialrecognitioninvideo-2006","role":"author","year":"2006","volume":"24","type":"bu ; cv","title":"The Use of Attention and Spatial Information for Rapid Facial Recognition in Video","pages":"557-563","number":"6","month":"Jun","key":"Bonaiuto_Itti06ivc","journal":"Image and Vision Computing","if":"2004 impact factor: 1.159","id":"Bonaiuto_Itti06ivc","file":"http://ilab.usc.edu/publications/doc/Bonaiuto_Itti06ivc.pdf","bibtype":"article","bibtex":"@article{ Bonaiuto_Itti06ivc,\n author = {J. Bonaiuto and L. Itti},\n title = {The Use of Attention and Spatial Information for Rapid Facial Recognition in Video},\n abstract = {Bottom-up visual attention is the process by which primates\nquickly select regions of an image likely to contain behaviorally\nrelevant objects. In artificial systems, restricting the task of\nobject recognition to these regions allows faster recognition and\nunsupervised learning of multiple objects in cluttered scenes. A\nproblem with this approach is that often objects that are\nsuperficially dissimilar to the target are given the same\nconsideration in recognition as similar objects. Additionally, in\nvideo, objects recognized in previous frames at locations distant to\nthe current fixation point often are given the same consideration in\nrecognition as objects previously recognized at proximal locations.\nHere we investigate the value of rapidly pruning the facial\nrecognition search space, first using similarity in the\nalready-computed low-level features that guide attention to prioritize\nmatching against an object database, and, second, using spatial\nproximity information derived from previous video frames. By\ncomparing the performance of Lowe's recognition algorithm with Itti \\&\nKoch's bottom-up attention model with and without search space\npruning, we demonstrate that this approach significantly accelerates\nfacial recognition in video footage.},\n journal = {Image and Vision Computing},\n volume = {24},\n number = {6},\n pages = {557-563},\n month = {Jun},\n year = {2006},\n file = {http://ilab.usc.edu/publications/doc/Bonaiuto_Itti06ivc.pdf},\n type = {bu ; cv},\n if = {2004 impact factor: 1.159}\n}","author_short":["Bonaiuto, J.","Itti, L."],"author":["Bonaiuto, J.","Itti, L."],"abstract":"Bottom-up visual attention is the process by which primates quickly select regions of an image likely to contain behaviorally relevant objects. In artificial systems, restricting the task of object recognition to these regions allows faster recognition and unsupervised learning of multiple objects in cluttered scenes. A problem with this approach is that often objects that are superficially dissimilar to the target are given the same consideration in recognition as similar objects. Additionally, in video, objects recognized in previous frames at locations distant to the current fixation point often are given the same consideration in recognition as objects previously recognized at proximal locations. Here we investigate the value of rapidly pruning the facial recognition search space, first using similarity in the already-computed low-level features that guide attention to prioritize matching against an object database, and, second, using spatial proximity information derived from previous video frames. By comparing the performance of Lowe's recognition algorithm with Itti \\& Koch's bottom-up attention model with and without search space pruning, we demonstrate that this approach significantly accelerates facial recognition in video footage."},"bibtype":"article","biburl":"http://ilab.usc.edu/publications/src/ilab.bib","downloads":0,"search_terms":["use","attention","spatial","information","rapid","facial","recognition","video","bonaiuto","itti"],"title":"The Use of Attention and Spatial Information for Rapid Facial Recognition in Video","year":2006,"dataSources":["wedBDxEpNXNCLZ2sZ"]}