The Blue One to the Left: Enabling Expressive User Interaction in a Multimodal Interface for Object Selection in Virtual 3D Environments. Budhiraja, P. & Madhvanath, S. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (ICMI '12), pages 57--58, New York, NY, USA, 2012. ACM.
Interaction with virtual 3D environments comes with a host of challenges. For instance, because 3D objects tend to occlude one another, performing object selection by pointing gestures is problematic, and more so when there are many objects in the scene. In the real world we tend to use speech to clarify our intent, by referring to distinctive attributes of the object and/or its absolute or relative location in space. Multimodal interactive systems involving speech and gesture have generally relied on speech for commands and deictic gestures for indicating the target object. In this paper, we present a system which allows object references to be made using gestures and speech, and supports a variety of expressions inspired by real-world usage.
@inproceedings{budhiraja_blue_2012,
	address = {New York, NY, USA},
	series = {{ICMI} '12},
	title = {The {Blue} {One} to the {Left}: {Enabling} {Expressive} {User} {Interaction} in a {Multimodal} {Interface} for {Object} {Selection} in {Virtual} 3D {Environments}},
	isbn = {978-1-4503-1467-1},
	shorttitle = {The {Blue} {One} to the {Left}},
	url = {http://doi.acm.org/10.1145/2388676.2388691},
	doi = {10.1145/2388676.2388691},
	abstract = {Interaction with virtual 3D environments comes with a host of challenges. For instance, because 3D objects tend to occlude one another, performing object selection by pointing gestures is problematic, and more so when there are many objects in the scene. In the real world we tend to use speech to clarify our intent, by referring to distinctive attributes of the object and/or its absolute or relative location in space. Multimodal interactive systems involving speech and gesture have generally relied on speech for commands and deictic gestures for indicating the target object. In this paper, we present a system which allows object references to be made using gestures and speech, and supports a variety of expressions inspired by real-world usage.},
	urldate = {2014-06-05},
	booktitle = {Proceedings of the 14th {ACM} {International} {Conference} on {Multimodal} {Interaction}},
	publisher = {ACM},
	author = {Budhiraja, Pulkit and Madhvanath, Sriganesh},
	year = {2012},
	pages = {57--58}
}