Towards supporting multimodal and multiuser interactions in multimedia languages. Guedes, A. In Proceedings of the 2016 ACM Symposium on Document Engineering (DocEng '16): Doctoral Consortium, 2016.
Multimedia languages—e.g. HTML, SMIL, and NCL (Nested Context Language)—are declarative programming languages focused on specifying multimedia applications using media and time abstractions. Traditionally, those languages focus on synchronizing a multimedia presentation and on supporting limited user interactions. Their support for media abstractions aims at graphical user interfaces (GUIs) by offering elements such as text (e.g. HTML’s <p>, SMIL’s <text>), graphics (e.g. <img>), and videos (e.g. HTML’s <video>). Additionally, their support for user interactions also aims at GUIs, as they offer abstractions only for mouse (e.g. HTML’s onClick, NCL’s onSelection) and keyboard (e.g. HTML and SMIL’s keyPress, and NCL’s onKeySelection) recognition. However, current advances in recognition technologies—e.g. speech, touch and gesture recognition—have given rise to a new class of multimodal user interfaces (MUIs) and the possibility of developing multiuser-aware multimedia systems. Throughout our research, we argue that multimedia language models should take advantage of those new possibilities, and we propose to extend existing multimedia languages with multimodal and multiuser abstractions.
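To make the contrast concrete, here is a minimal sketch in HTML (an illustration added for this summary, not taken from the paper): the first lines rely only on the GUI abstractions the abstract names (a <video> element plus mouse and keyboard handlers), while the commented-out attributes show, purely hypothetically, what declarative multimodal and multiuser hooks could look like.

	<!-- GUI-only interaction, the kind current multimedia languages support -->
	<video id="intro" src="intro.mp4" autoplay></video>
	<button onclick="document.getElementById('intro').pause()">Pause</button>
	<input onkeypress="if (event.key === ' ') document.getElementById('intro').play()" placeholder="press space to resume">

	<!-- Hypothetical multimodal/multiuser abstractions (illustrative only; not the paper's proposed syntax):
	     <video src="intro.mp4" onspeechcommand="pause" ongesture="swipe-left" user="viewer2">
	-->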
@inproceedings{guedes_towards_2016,
	title = {Towards supporting multimodal and multiuser interactions in multimedia languages},
	rights = {All rights reserved},
	url = {https://doceng2016.cvl.tuwien.ac.at/?page_id=594},
	abstract = {Multimedia languages—e.g. {HTML}, {SMIL}, and {NCL} (Nested Context Language)—are declarative programming languages focused on specifying multimedia applications using media and time abstractions. Traditionally, those languages focus on synchronizing a multimedia presentation and on supporting limited user interactions. Their support for media abstractions aims at graphical user interfaces ({GUIs}) by offering elements such as text (e.g. {HTML}’s {\textless}p{\textgreater}, {SMIL}’s {\textless}text{\textgreater}), graphics (e.g. {\textless}img{\textgreater}), and videos (e.g. {HTML}’s {\textless}video{\textgreater}). Additionally, their support for user interactions also aims at {GUIs}, as they offer abstractions only for mouse (e.g. {HTML}’s {onClick}, {NCL}’s {onSelection}) and keyboard (e.g. {HTML} and {SMIL}’s {keyPress}, and {NCL}’s {onKeySelection}) recognition. However, current advances in recognition technologies—e.g. speech, touch and gesture recognition—have given rise to a new class of multimodal user interfaces ({MUIs}) and the possibility of developing multiuser-aware multimedia systems. Throughout our research, we argue that multimedia language models should take advantage of those new possibilities, and we propose to extend existing multimedia languages with multimodal and multiuser abstractions.},
	booktitle = {Proceedings of the 2016 {ACM} Symposium on Document Engineering ({DocEng} '16): Doctoral Consortium},
	author = {Guedes, Alan},
	year = {2016},
}
