Pichon, E. & Itti, L. Real-Time High-Performance Attention Focusing for Outdoors Mobile Beobots. In Proc. AAAI Spring Symposium, Stanford, CA (AAAI-TR-SS-02-04), page 63, Mar 2002.
@inproceedings{Pichon_Itti02aaai,
 author = {E. Pichon and L. Itti},
title = {Real-Time High-Performance Attention Focusing for Outdoors
Mobile Beobots},
year = {2002},
month = {Mar},
pages = {63},
 booktitle = {Proc. AAAI Spring Symposium (AAAI-TR-SS-02-04)},
 address = {Stanford, CA},
abstract = {When confronted with cluttered natural environments, animals still
perform orders of magnitude better than artificial vision systems in
tasks such as orienting, target detection, navigation and scene
understanding. The recent widespread availability of significant
computational resources, however, in particular through the deployment
of so-called "Beowulf" clusters of low-cost personal computers,
leaves us little excuse for the enormous gap still separating
biological from machine vision systems.
We describe a neuromorphic model of how our visual attention is
attracted towards conspicuous locations in a visual scene. It
replicates processing in posterior parietal cortex and other brain
areas along the dorsal visual stream in the primate brain. The model
includes a bottom-up (image-based) computation of low-level color,
intensity, orientation and motion features, as well as a non-linear
spatial competition which enhances salient locations in each of these
feature channels. All feature channels feed into a unique scalar
"saliency map" which controls where to next focus attention
onto. Because it includes a detailed low-level vision front-end, the
model has been applied not only to laboratory stimuli, but also to a
wide variety of natural scenes. In addition to predicting a wealth of
psychophysical experiments, the model demonstrated remarkable
performance at detecting salient objects in outdoors imagery ---
sometimes exceeding human performance --- despite wide variations in
imaging conditions, targets to be detected, and environments.
The present paper focuses on a recently completed parallelization of
the model, which runs at 30 frames/s on a 16-CPU Beowulf cluster, and
on the enhancement of this real-time model to include motion cues in
addition to the previously studied color, intensity and orientation
cues. The parallel model architecture and its deployment onto Linux
Beowulf clusters are described, as well as several examples of
applications to real-time outdoors color video streams. Implementation
on a 4-CPU rugged high-speed mobile robot, a "Beobot," is also
described. The model proves very robust at detecting salient targets
from live video streams, despite large possible variations in
illumination, rapid camera jitter, clutter, or omnipresent optical
flow (e.g., when used on a moving vehicle). The success of this
approach suggests that the neuromorphic architecture described may
represent a robust and efficient real-time machine vision front-end,
which can be used in conjunction with more detailed localized object
recognition and identification algorithms to be applied at the
selected salient locations.},
type = {mod;bu;cv;bb},
review = {abs/conf}
}
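The abstract above describes a bottom-up saliency architecture: low-level feature channels (color, intensity, orientation, motion), a non-linear spatial competition within each channel, and a single scalar saliency map that directs attention. The following is a minimal illustrative sketch of that pipeline in NumPy, not the authors' iLab implementation; the channel choices, the box-blur center-surround operator, and the peak-boosting normalization are simplifying assumptions.

```python
import numpy as np

def center_surround(feature, center_r=1, surround_r=4):
    """Center-surround contrast, approximated as the absolute
    difference between a fine and a coarse box blur."""
    def blur(img, r):
        padded = np.pad(img, r, mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += padded[r + dy : r + dy + img.shape[0],
                              r + dx : r + dx + img.shape[1]]
        return out / (2 * r + 1) ** 2
    return np.abs(blur(feature, center_r) - blur(feature, surround_r))

def normalize(m):
    """Crude stand-in for the non-linear spatial competition:
    rescale to [0, 1], then boost maps with few strong peaks."""
    m = (m - m.min()) / (np.ptp(m) + 1e-9)
    return m * (m.max() - m.mean()) ** 2

def saliency(rgb):
    """Combine intensity and two color-opponency channels into
    one scalar saliency map (same shape as the input image)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                 # red/green opponency
    by = b - (r + g) / 2.0     # blue/yellow opponency
    maps = [normalize(center_surround(f)) for f in (intensity, rg, by)]
    return sum(maps) / len(maps)
```

On a frame with a conspicuous colored patch, the argmax of the returned map falls on the patch; the full model adds orientation and motion channels and multiple scales, which this sketch omits for brevity.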