Using multimodal interaction to navigate in arbitrary virtual VRML worlds. Althoff, F., McGlaun, G., Schuller, B., Morguet, P., & Lang, M. In Proceedings of the 2001 workshop on Perceptive user interfaces - PUI '01, pages 1, Orlando, Florida, 2001. ACM Press.
In this paper we present a multimodal interface for navigating in arbitrary virtual VRML worlds. Conventional haptic devices like keyboard, mouse, joystick and touchscreen can freely be combined with special Virtual-Reality hardware like spacemouse, data glove and position tracker. As a key feature, the system additionally provides intuitive input by command and natural speech utterances as well as dynamic head and hand gestures. The communication of the interface components is based on the abstract formalism of a context-free grammar, allowing the representation of device-independent information. Taking into account the current system context, user interactions are combined in a semantic unification process and mapped onto a model of the viewer's functionality vocabulary. To integrate the continuous multimodal information stream we use a straightforward rule-based approach and a new technique based on evolutionary algorithms. Our navigation interface has been extensively evaluated in usability studies, obtaining excellent results.
@inproceedings{althoff_using_2001,
	address = {Orlando, Florida},
	title = {Using multimodal interaction to navigate in arbitrary virtual {VRML} worlds},
	url = {http://portal.acm.org/citation.cfm?doid=971478.971494},
	doi = {10.1145/971478.971494},
	abstract = {In this paper we present a multimodal interface for navigating in arbitrary virtual VRML worlds. Conventional haptic devices like keyboard, mouse, joystick and touchscreen can freely be combined with special Virtual-Reality hardware like spacemouse, data glove and position tracker. As a key feature, the system additionally provides intuitive input by command and natural speech utterances as well as dynamic head and hand gestures. The communication of the interface components is based on the abstract formalism of a context-free grammar, allowing the representation of device-independent information. Taking into account the current system context, user interactions are combined in a semantic unification process and mapped onto a model of the viewer's functionality vocabulary. To integrate the continuous multimodal information stream we use a straightforward rule-based approach and a new technique based on evolutionary algorithms. Our navigation interface has been extensively evaluated in usability studies, obtaining excellent results.},
	language = {en},
	urldate = {2022-03-26},
	booktitle = {Proceedings of the 2001 workshop on {Perceptive} user interfaces - {PUI} '01},
	publisher = {ACM Press},
	author = {Althoff, Frank and McGlaun, Gregor and Schuller, Björn and Morguet, Peter and Lang, Manfred},
	year = {2001},
	pages = {1},
}
