Automatic Eye and Head Animation for Animats. Itti, L. In From Animals to Animats 8, Proceedings of the Eighth International Conference on the Simulation of Adaptive Behavior, Santa Monica, California, Jul 2004. Plenary lecture.

Abstract: We propose a computational model for the automatic animation of the eyes and head of virtual or physical avatars. Given any input in the form of video streams, the model finds the most salient (interesting) locations in the agent's visual environment and directs its gaze towards them. The computation of visual salience at the core of the model relies on a neurobiological model of visual processing along the occipito-parietal stream in the primate brain. The relative contributions of eyes and head towards a given gaze shift are then computed from a gaze decomposition model derived from behavioral recordings in Rhesus monkeys. Finally, the dynamics of eye and head movements are calibrated against behavioral recordings from human subjects. The model autonomously gazes towards locations also gazed at by human observers watching the same video inputs, to a highly significant degree.
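The abstract outlines a three-stage pipeline: saliency-based target selection, decomposition of each gaze shift into eye and head contributions, and calibration of movement dynamics. The sketch below is a toy illustration of the first two stages only, not the published model: the center-surround contrast map is a crude stand-in for the full neuromorphic saliency architecture, and the 20-degree eye-amplitude cap and 60-degree field of view are assumed round numbers, not parameters from the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def toy_saliency(frame):
    """Center-surround intensity contrast: a crude proxy for the
    multi-scale, multi-feature saliency model named in the abstract."""
    f = frame.astype(float)
    center = uniform_filter(f, size=3)     # fine-scale local mean
    surround = uniform_filter(f, size=15)  # coarse-scale local mean
    return np.abs(center - surround)

def pick_gaze_target(saliency):
    """Winner-take-all selection: gaze at the most salient location."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)

def decompose_gaze(shift_deg, eye_limit_deg=20.0):
    """Split a horizontal gaze shift into eye and head components.
    Assumption: the eye alone covers shifts up to eye_limit_deg and the
    head supplies the remainder -- a simplification of the behavioral
    decomposition the abstract derives from monkey recordings."""
    eye = np.sign(shift_deg) * min(abs(shift_deg), eye_limit_deg)
    head = shift_deg - eye
    return eye, head

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160))               # stand-in for one video frame
    y, x = pick_gaze_target(toy_saliency(frame))
    # Map the pixel offset from the image center to a horizontal angle,
    # assuming a hypothetical 60-degree horizontal field of view.
    shift = (x - 80) / 160 * 60.0
    eye, head = decompose_gaze(shift)
    print(f"target=({y},{x}) gaze={shift:+.1f} deg eye={eye:+.1f} deg head={head:+.1f} deg")

In the published model the eye/head split presumably varies in a more graded way with gaze amplitude and initial eye position; the fixed saturation threshold here is purely illustrative.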
@invited{Itti04sab,
author = {L. Itti},
title = {Automatic Eye and Head Animation for Animats},
abstract = {We propose a computational model for the automatic animation
of the eyes and head of virtual or physical avatars. Given any input
in the form of video streams, the model finds the most salient
(interesting) locations in the agent's visual environment and directs
its gaze towards them. The computation of visual salience at the core
of the model relies on a neurobiological model of visual processing
along the occipito-parietal stream in the primate brain. The relative
contributions of eyes and head towards a given gaze shift are then
computed from a gaze decomposition model derived from behavioral
recordings in Rhesus monkeys. Finally, the dynamics of eye and head
movements are calibrated against behavioral recordings from human
subjects. The model autonomously gazes towards locations also gazed at
by human observers watching the same video inputs, to a highly
significant degree.},
booktitle = {From Animals to Animats 8, Proceedings of the Eighth
International Conference on the Simulation of Adaptive Behavior,
Santa Monica, California},
month = {Jul},
year = {2004},
type = {cv ; bu ; mod},
note = {Plenary lecture}
}
{"_id":{"_str":"5298a1a09eb585cc260008a1"},"__v":0,"authorIDs":[],"author_short":["Itti, L."],"bibbaseid":"itti-automaticeyeandheadanimationforanimats-2004","bibdata":{"html":"<div class=\"bibbase_paper\"> \n\n\n<span class=\"bibbase_paper_titleauthoryear\">\n\t<span class=\"bibbase_paper_title\"><a name=\"Itti04sab\"> </a>Automatic Eye and Head Animation for Animats.</span>\n\t<span class=\"bibbase_paper_author\">\nItti, L.</span>\n\t<!-- <span class=\"bibbase_paper_year\">2004</span>. -->\n</span>\n\n\n\nJul 2004.\n\n\nPlenary lecture.\n\n<br class=\"bibbase_paper_content\"/>\n\n<span class=\"bibbase_paper_content\">\n \n \n \n <a href=\"javascript:showBib('Itti04sab')\"\n class=\"bibbase link\">\n <!-- <img src=\"http://www.bibbase.org/img/filetypes/bib.png\" -->\n\t<!-- alt=\"Automatic Eye and Head Animation for Animats [bib]\" -->\n\t<!-- class=\"bibbase_icon\" -->\n\t<!-- style=\"width: 24px; height: 24px; border: 0px; vertical-align: text-top\"><span class=\"bibbase_icon_text\">Bibtex</span> -->\n BibTeX\n <i class=\"fa fa-caret-down\"></i></a>\n \n \n \n <a class=\"bibbase_abstract_link bibbase link\"\n href=\"javascript:showAbstract('Itti04sab')\">\n Abstract\n <i class=\"fa fa-caret-down\"></i></a>\n \n \n \n\n \n \n \n</span>\n\n<div class=\"well well-small bibbase\" id=\"bib_Itti04sab\"\n style=\"display:none\">\n <pre>@invited{ Itti04sab,\n author = {L. Itti},\n title = {Automatic Eye and Head Animation for Animats},\n abstract = {We propose a computational model for the automatic animation\nof the eyes and head of virtual or physical avatars. Given any input\nin the form of video streams, the model finds the most salient\n(interesting) locations in the agent's visual environment and directs\nits gaze towards them. The computation of visual salience at the basis\nof the model relies on a neurobiological model of visual processing\nalong the occipito-parietal stream in the primate brain. The relative\ncontributions of eyes and head towards a given gaze shift are then\ncomputed from a gaze decomposition model derived from behavioral\nrecordings in Rhesus monkeys. Finally, the dynamics of eye and head\nmovements are calibrated against behavioral recordings from human\nsubjects. The model autonomously gazes towards locations also gazed to\nby human observers watching the same video inputs, in a highly\nsignificant manner.},\n booktitle = {From Animals to Animats 8, Proceedings of the Eighth\nInternational Conference on the Simulation of Autonomous Behavior,\nSanta Monica, California},\n month = {Jul},\n year = {2004},\n type = {cv ; bu ; mod},\n note = {Plenary lecture}\n}</pre>\n</div>\n\n\n<div class=\"well well-small bibbase\" id=\"abstract_Itti04sab\"\n style=\"display:none\">\n We propose a computational model for the automatic animation of the eyes and head of virtual or physical avatars. Given any input in the form of video streams, the model finds the most salient (interesting) locations in the agent's visual environment and directs its gaze towards them. The computation of visual salience at the basis of the model relies on a neurobiological model of visual processing along the occipito-parietal stream in the primate brain. The relative contributions of eyes and head towards a given gaze shift are then computed from a gaze decomposition model derived from behavioral recordings in Rhesus monkeys. Finally, the dynamics of eye and head movements are calibrated against behavioral recordings from human subjects. 
The model autonomously gazes towards locations also gazed to by human observers watching the same video inputs, in a highly significant manner.\n</div>\n\n\n</div>\n","downloads":0,"bibbaseid":"itti-automaticeyeandheadanimationforanimats-2004","role":"author","year":"2004","type":"cv ; bu ; mod","title":"Automatic Eye and Head Animation for Animats","note":"Plenary lecture","month":"Jul","key":"Itti04sab","id":"Itti04sab","booktitle":"From Animals to Animats 8, Proceedings of the Eighth International Conference on the Simulation of Autonomous Behavior, Santa Monica, California","bibtype":"invited","bibtex":"@invited{ Itti04sab,\n author = {L. Itti},\n title = {Automatic Eye and Head Animation for Animats},\n abstract = {We propose a computational model for the automatic animation\nof the eyes and head of virtual or physical avatars. Given any input\nin the form of video streams, the model finds the most salient\n(interesting) locations in the agent's visual environment and directs\nits gaze towards them. The computation of visual salience at the basis\nof the model relies on a neurobiological model of visual processing\nalong the occipito-parietal stream in the primate brain. The relative\ncontributions of eyes and head towards a given gaze shift are then\ncomputed from a gaze decomposition model derived from behavioral\nrecordings in Rhesus monkeys. Finally, the dynamics of eye and head\nmovements are calibrated against behavioral recordings from human\nsubjects. The model autonomously gazes towards locations also gazed to\nby human observers watching the same video inputs, in a highly\nsignificant manner.},\n booktitle = {From Animals to Animats 8, Proceedings of the Eighth\nInternational Conference on the Simulation of Autonomous Behavior,\nSanta Monica, California},\n month = {Jul},\n year = {2004},\n type = {cv ; bu ; mod},\n note = {Plenary lecture}\n}","author_short":["Itti, L."],"author":["Itti, L."],"abstract":"We propose a computational model for the automatic animation of the eyes and head of virtual or physical avatars. Given any input in the form of video streams, the model finds the most salient (interesting) locations in the agent's visual environment and directs its gaze towards them. The computation of visual salience at the basis of the model relies on a neurobiological model of visual processing along the occipito-parietal stream in the primate brain. The relative contributions of eyes and head towards a given gaze shift are then computed from a gaze decomposition model derived from behavioral recordings in Rhesus monkeys. Finally, the dynamics of eye and head movements are calibrated against behavioral recordings from human subjects. The model autonomously gazes towards locations also gazed to by human observers watching the same video inputs, in a highly significant manner."},"bibtype":"invited","biburl":"http://ilab.usc.edu/publications/src/ilab.bib","downloads":0,"search_terms":["automatic","eye","head","animation","animats","itti"],"title":"Automatic Eye and Head Animation for Animats","year":2004,"dataSources":["wedBDxEpNXNCLZ2sZ"]}