Andrist, S. Controllable Models of Gaze Behavior for Virtual Agents and Humanlike Robots. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction (ICMI '13), pages 333--336, New York, NY, USA, 2013. ACM.
Embodied social agents, through their ability to afford embodied interaction using nonverbal human communicative cues, hold great promise in application areas such as education, training, rehabilitation, and collaborative work. Gaze cues are particularly important for achieving significant social and communicative goals. In this research, I explore how agents - both virtual agents and humanlike robots - might achieve such goals through the use of various gaze mechanisms. To this end, I am developing computational control models of gaze behavior that treat gaze as the output of a system with a number of multimodal inputs. These inputs can be characterized at different levels of interaction, from non-interactive (e.g., physical characteristics of the agent itself) to fully interactive (e.g., speech and gaze behavior of a human interlocutor). This research will result in a number of control models that each focus on a different gaze mechanism, combined into an open-source library of gaze behaviors that will be usable by both human-robot and human-virtual agent interaction designers. System-level evaluations in naturalistic settings will validate this gaze library for its ability to evoke positive social and cognitive responses in human users.
@inproceedings{andrist_controllable_2013,
	address = {New York, NY, USA},
	series = {{ICMI} '13},
	title = {Controllable {Models} of {Gaze} {Behavior} for {Virtual} {Agents} and {Humanlike} {Robots}},
	isbn = {978-1-4503-2129-7},
	url = {http://doi.acm.org/10.1145/2522848.2532194},
	doi = {10.1145/2522848.2532194},
	abstract = {Embodied social agents, through their ability to afford embodied interaction using nonverbal human communicative cues, hold great promise in application areas such as education, training, rehabilitation, and collaborative work. Gaze cues are particularly important for achieving significant social and communicative goals. In this research, I explore how agents - both virtual agents and humanlike robots - might achieve such goals through the use of various gaze mechanisms. To this end, I am developing computational control models of gaze behavior that treat gaze as the output of a system with a number of multimodal inputs. These inputs can be characterized at different levels of interaction, from non-interactive (e.g., physical characteristics of the agent itself) to fully interactive (e.g., speech and gaze behavior of a human interlocutor). This research will result in a number of control models that each focus on a different gaze mechanism, combined into an open-source library of gaze behaviors that will be usable by both human-robot and human-virtual agent interaction designers. System-level evaluations in naturalistic settings will validate this gaze library for its ability to evoke positive social and cognitive responses in human users.},
	urldate = {2014-06-05},
	booktitle = {Proceedings of the 15th {ACM} on {International} {Conference} on {Multimodal} {Interaction}},
	publisher = {ACM},
	author = {Andrist, Sean},
	year = {2013},
	pages = {333--336}
}