The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue. Foster, M., Bard, E., Guhe, M., Hill, R., Oberlander, J., & Knoll, A. In Proceedings of the 3rd ACM/IEEE international conference on Human Robot Interaction (HRI '08), pages 295–302, 2008. ACM.
Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully-elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; however, other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures. When referring to objects in the context of a task-based interaction involving jointly manipulating objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such expressions are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.
@inproceedings{e72b6d626f6f48ac8d3cd7f9a3ed9351,
  title     = "The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue",
  abstract  = "Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully-elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; however, other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures. When referring to objects in the context of a task-based interaction involving jointly manipulating objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such expressions are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.",
  keywords  = "multimodal dialogue, referring expressions",
  author    = "Foster, {Mary Ellen} and Bard, {Ellen Gurman} and Markus Guhe and Hill, {Robin L.} and Jon Oberlander and Alois Knoll",
  year      = "2008",
  doi       = "10.1145/1349822.1349861",
  language  = "English",
  isbn      = "978-1-60558-017-3",
  pages     = "295--302",
  booktitle = "Proceedings of the 3rd ACM/IEEE international conference on Human Robot Interaction (HRI '08)",
  publisher = "ACM",
}