FARMI: A Framework for Recording Multi-Modal Interactions. Jonell, P., Bystedt, M., Fallgren, P., Kontogiorgos, D., Lopes, J. D. A., Malisz, Z., Mascarenhas, S., Oertel, C., Raveh, E., & Shore, T. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 3969-3974, 2018.
In this paper we present (1) a processing architecture used to collect multi-modal sensor data, both for corpora collection and real-time processing, (2) an open-source implementation thereof and (3) a use-case where we deploy the architecture in a multi-party deception game, featuring six human players and one robot. The architecture is agnostic to the choice of hardware (e.g. microphones, cameras, etc.) and programming languages, although our implementation is mostly written in Python. In our use-case, different methods of capturing verbal and non-verbal cues from the participants were used. These were processed in real-time and used to inform the robot about the participants’ deceptive behaviour. The framework is of particular interest for researchers who are interested in the collection of multi-party, richly recorded corpora and the design of conversational systems. Moreover, for researchers who are interested in human-robot interaction, the available modules offer the possibility to easily create both autonomous and Wizard-of-Oz interactions.
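The sketch below is not from the paper; it is a minimal, hypothetical Python illustration of the kind of hardware-agnostic capture module the abstract describes: each sensor runs on its own thread and emits frames stamped with a shared clock, so different modalities can be aligned afterwards. All names here (Sensor, Recorder, FakeMicrophone) are invented for illustration and do not reflect FARMI's actual API; see the open-source implementation for that.

import abc
import json
import queue
import threading
import time


class Sensor(abc.ABC):
    # Hardware-agnostic interface: any device that yields one sample per read().
    name = "sensor"

    @abc.abstractmethod
    def read(self):
        """Return one raw sample from the device."""


class FakeMicrophone(Sensor):
    # Stand-in for a real capture device; swap in actual hardware bindings here.
    name = "mic_0"

    def read(self):
        time.sleep(0.01)      # simulate the device's sampling interval
        return {"rms": 0.42}  # placeholder audio feature


class Recorder:
    # Pulls frames from each sensor on its own thread and queues them with a
    # shared wall-clock timestamp, so modalities can be aligned offline.
    def __init__(self, sensors):
        self.sensors = sensors
        self.frames = queue.Queue()
        self._stop = threading.Event()

    def _capture(self, sensor):
        while not self._stop.is_set():
            self.frames.put({"sensor": sensor.name,
                             "t": time.time(),
                             "data": sensor.read()})

    def run(self, duration_s):
        threads = [threading.Thread(target=self._capture, args=(s,), daemon=True)
                   for s in self.sensors]
        for t in threads:
            t.start()
        time.sleep(duration_s)
        self._stop.set()
        for t in threads:
            t.join()

    def dump(self, path):
        # One JSON line per timestamped frame.
        with open(path, "w") as f:
            while not self.frames.empty():
                f.write(json.dumps(self.frames.get()) + "\n")


if __name__ == "__main__":
    rec = Recorder([FakeMicrophone()])
    rec.run(duration_s=1.0)
    rec.dump("session.jsonl")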
@inproceedings{Jonell1217276,
 title = {FARMI: A Framework for Recording Multi-Modal Interactions},
 year = {2018},
 pages = {3969-3974},
 abstract = {In this paper we present (1) a processing architecture used to collect multi-modal sensor data, both for corpora collection and real-time processing, (2) an open-source implementation thereof and (3) a use-case where we deploy the architecture in a multi-party deception game, featuring six human players and one robot. The architecture is agnostic to the choice of hardware (e.g. microphones, cameras, etc.) and programming languages, although our implementation is mostly written in Python. In our use-case, different methods of capturing verbal and non-verbal cues from the participants were used. These were processed in real-time and used to inform the robot about the participants’ deceptive behaviour. The framework is of particular interest for researchers who are interested in the collection of multi-party, richly recorded corpora and the design of conversational systems. Moreover, for researchers who are interested in human-robot interaction, the available modules offer the possibility to easily create both autonomous and Wizard-of-Oz interactions.},
 author = {Jonell, Patrik and Bystedt, Mattias and Fallgren, Per and Kontogiorgos, Dimosthenis and Lopes, Jos{\'e} David Aguas and Malisz, Zofia and Mascarenhas, Samuel and Oertel, Catharine and Raveh, Eran and Shore, Todd},
 booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}
}
