Neuromorphic vision and attention for mobile robots. Itti, L. Oct 2007.
@invited{Itti07iros,
  author = {L. Itti},
  title = {Neuromorphic vision and attention for mobile robots},
  abstract = {In recent years, a number of neurally-inspired computational
                  models have emerged which demonstrate unparalleled
                  performance, flexibility, and adaptability in coping
                  with real-world inputs. In the visual domain, in
                  particular, such models are achieving great strides
                  in tasks including focusing attention onto the most
                  important locations in a scene, recognizing attended
                  objects, computing contextual information in the
                  form of the ``gist'' of the scene, and
                  planning/executing visually-guided motor actions,
                  among many other functions. However, these models
                  have not yet been able to demonstrate much
                  higher-level or cognitive computation ability. On
                  the other hand, symbolic models from artificial
                  intelligence have reached significant maturity in
                  their cognitive reasoning abilities, but the worlds
                  in which they can operate have been necessarily
                  simplified (e.g., a chess board, a virtual maze). In
                  this talk I will present the latest developments in
                  our and other laboratories which attempt to bridge
                  the gap between these two disciplines, neural
                  modeling and artificial intelligence, in developing
                  the next generation of robots. I will briefly review
                  a number of efforts which aim at building models
                  that can both process real-world inputs in robust
                  and flexible ways, and perform cognitive reasoning
                  on the symbols extracted from these inputs. I will
                  draw from examples in the biological/computer vision
                  fields, including algorithms for complex scene
                  understanding, and for robot navigation.},
  booktitle = {IEEE/RSJ IROS 2007 Workshop: From sensors to human spatial concepts, San Diego, CA},
  month = {Oct},
  year = {2007},
  type = {bu;td;mod}
}