Extending NCL to Support Multiuser and Multimodal Interactions. Guedes, Á. L., Azevedo, R. G. d. A., Colcher, S., & Barbosa, S. D. In Proceedings of the 22nd Brazilian Symposium on Multimedia and the Web (Webmedia '16), pages 39–46, New York, NY, USA, 2016. ACM. Acceptance ratio: 30%
@inproceedings{2016_11_guedes,
author={Guedes, {\'A}lan L. V. and Azevedo, Roberto Gerson de Albuquerque and
Colcher, S{\'e}rgio and Barbosa, Simone D. J.},
title={Extending NCL to Support Multiuser and Multimodal Interactions},
booktitle={Proceedings of the 22nd Brazilian Symposium on Multimedia and the
Web},
series={Webmedia '16},
year={2016},
isbn={978-1-4503-4512-5},
location={Teresina, Piau{\'i}, Brazil},
pages={39--46},
numpages={8},
url={http://doi.acm.org/10.1145/2976796.2976869},
doi={10.1145/2976796.2976869},
acmid={2976869},
publisher={ACM},
address={New York, NY, USA},
keywords={NCL, ginga-NCL, multimedia languages, multimodal interactions,
multiuser interactions, nested context language},
abstract={Recent advances in technologies for speech, touch, and gesture
recognition have given rise to a new class of user interfaces that not only
explores multiple modalities but also allows for multiple interacting users.
Even so, current declarative multimedia languages (e.g., HTML, SMIL, and NCL)
support only limited forms of user input (mainly keyboard and mouse) for a
single user. In this paper, we study how the NCL multimedia language could
take advantage of those new recognition technologies. To do so, we revisit
the model behind NCL, named NCM (Nested Context Model), and extend it with
first-class concepts supporting multiuser and multimodal features. To
evaluate our approach, we instantiate the proposal and discuss some usage
scenarios, developed as NCL applications with our extended features.},
note={Acceptance ratio: 30\%},
}