The JESTKOD database: an affective multimodal database of dyadic interactions. Bozkurt, E., Khaki, H., Kececi, S., Turker, B. B., Yemez, Y., & Erzin, E. Language Resources and Evaluation, 51(3):857-872, September 2017.
In human-to-human communication, gesture and speech co-exist in time with a tight synchrony, and gestures are often used to complement or to emphasize speech. In human-computer interaction systems, natural, affective and believable use of gestures would be a valuable key component in adopting and emphasizing human-centered aspects. However, natural and affective multimodal data for studying computational models of gesture and speech are limited. In this study, we introduce the JESTKOD database, which consists of speech and full-body motion capture recordings collected in a dyadic interaction setting under agreement and disagreement scenarios. Participants of the dyadic interactions are native Turkish speakers, and the recordings of each participant are rated in a dimensional affect space. We present our multimodal data collection and annotation process, as well as our preliminary experimental studies on agreement/disagreement classification of dyadic interactions using body gesture and speech data. The JESTKOD database provides a valuable asset for investigating gesture and speech towards designing more natural and affective human-computer interaction systems.
@article{ISI:000407360600011,
  author    = {Bozkurt, Elif and Khaki, Hossein and Kececi, Sinan and Turker, B. Berker
               and Yemez, Yucel and Erzin, Engin},
  title     = {{The JESTKOD database: an affective multimodal database of dyadic
               interactions}},
  journal   = {Language Resources and Evaluation},
  year      = {2017},
  volume    = {51},
  number    = {3},
  pages     = {857-872},
  month     = sep,
  abstract  = {In human-to-human communication, gesture and speech co-exist in time
               with a tight synchrony, and gestures are often utilized to complement or
               to emphasize speech. In human-computer interaction systems, natural,
               affective and believable use of gestures would be a valuable key
               component in adopting and emphasizing human-centered aspects. However,
               natural and affective multimodal data, for studying computational models
               of gesture and speech, is limited. In this study, we introduce the
               JESTKOD database, which consists of speech and full-body motion capture
               data recordings in dyadic interaction setting under agreement and
               disagreement scenarios. Participants of the dyadic interactions are
               native Turkish speakers and recordings of each participant are rated in
               dimensional affect space. We present our multimodal data collection and
               annotation process, as well as our preliminary experimental studies on
               agreement/disagreement classification of dyadic interactions using body
               gesture and speech data. The JESTKOD database provides a valuable asset
               to investigate gesture and speech towards designing more natural and
               affective human-computer interaction systems.},
  doi       = {10.1007/s10579-016-9377-0},
  issn      = {1574-020X},
  eissn     = {1574-0218},
  unique-id = {ISI:000407360600011},
}
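For convenience, a minimal sketch of citing this entry from a LaTeX document, assuming the record above is saved in a hypothetical file named jestkod.bib (the file name and surrounding document are illustrative, not part of the original record):

\documentclass{article}
\begin{document}
% Cite the JESTKOD paper by its BibTeX key from the record above
The JESTKOD corpus~\cite{ISI:000407360600011} provides dyadic speech
and full-body motion capture recordings of native Turkish speakers.

\bibliographystyle{plain}
\bibliography{jestkod} % hypothetical .bib file containing the entry above
\end{document}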
