Neuromorphic vision and attention for mobile robots. Itti, L. Oct 2007. In recent years, a number of neurally-inspired computational models have emerged which demonstrate unparalleled performance, flexibility, and adaptability in coping with real-world inputs. In the visual domain, in particular, such models are making great strides in tasks including focusing attention onto the most important locations in a scene, recognizing attended objects, computing contextual information in the form of the "gist" of the scene, and planning/executing visually-guided motor actions, among many other functions. However, these models have not yet been able to demonstrate much higher-level or cognitive computation ability. On the other hand, symbolic models from artificial intelligence have reached significant maturity in their cognitive reasoning abilities, but the worlds in which they can operate have been necessarily simplified (e.g., a chess board, a virtual maze). In this talk I will present the latest developments in our laboratory and others which attempt to bridge the gap between these two disciplines, neural modeling and artificial intelligence, in developing the next generation of robots. I will briefly review a number of efforts which aim at building models that can both process real-world inputs in robust and flexible ways, and perform cognitive reasoning on the symbols extracted from these inputs. I will draw from examples in the biological/computer vision fields, including algorithms for complex scene understanding and for robot navigation.
@invited{Itti07iros,
author = {L. Itti},
title = {Neuromorphic vision and attention for mobile robots},
abstract = {In recent years, a number of neurally-inspired computational
models have emerged which demonstrate unparalleled
performance, flexibility, and adaptability in coping
with real-world inputs. In the visual domain, in
             particular, such models are making great strides
in tasks including focusing attention onto the most
important locations in a scene, recognizing attended
objects, computing contextual information in the
form of the "gist" of the scene, and
planning/executing visually-guided motor actions,
among many other functions. However, these models
have not yet been able to demonstrate much
higher-level or cognitive computation ability. On
the other hand, symbolic models from artificial
intelligence have reached significant maturity in
their cognitive reasoning abilities, but the worlds
in which they can operate have been necessarily
simplified (e.g., a chess board, a virtual maze). In
this talk I will present the latest developments in
             our laboratory and others which attempt to bridge
the gap between these two disciplines, neural
modeling and artificial intelligence, in developing
the next generation of robots. I will briefly review
a number of efforts which aim at building models
that can both process real-world inputs in robust
and flexible ways, and perform cognitive reasoning
on the symbols extracted from these inputs. I will
draw from examples in the biological/computer vision
             fields, including algorithms for complex scene
             understanding and for robot navigation.},
booktitle = {IEEE/RSJ IROS 2007 Workshop: From sensors to human spatial
concepts, San Diego, CA},
month = {Oct},
year = {2007},
type = {bu;td;mod}
}