Transfer learning emotion manifestation across music and speech. Coutinho, E., Deng, J., & Schuller, B. In Proceedings of the International Joint Conference on Neural Networks, pages 3592–3598, July 2014. IEEE.
In this article, we focus on time-continuous predictions of emotion in music and speech, and the transfer of learning from one domain to the other. First, we compare the use of Recurrent Neural Networks (RNN) with standard hidden units (Simple Recurrent Network, SRN) and Long Short-Term Memory (LSTM) blocks for intra-domain acoustic emotion recognition. We show that LSTM networks outperform SRN, and we explain, on average, 74%/59% (music) and 42%/29% (speech) of the variance in Arousal/Valence. Next, we evaluate whether cross-domain predictions of emotion are a viable option for acoustic emotion recognition, and we test the use of Transfer Learning (TL) for feature space adaptation. On average, our models are able to explain 70%/43% (music) and 28%/11% (speech) of the variance in Arousal/Valence. Overall, results indicate good cross-domain generalization performance, particularly for the model trained on speech and tested on music without pre-encoding of the input features. To the best of our knowledge, this is the first demonstration of cross-modal time-continuous predictions of emotion in the acoustic domain.
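To make the setup concrete, here is a minimal sketch (in PyTorch; not the authors' implementation) of the kind of sequence-to-sequence recurrent regressor the abstract describes, trained on one domain and evaluated on the other. All feature dimensions, layer sizes, and data below are hypothetical placeholders, not values from the paper.

import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    # Frame-level regressor: acoustic features -> [arousal, valence].
    # n_features and hidden_size are assumed, not taken from the paper.
    def __init__(self, n_features=65, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, x):          # x: (batch, time, n_features)
        h, _ = self.lstm(x)        # h: (batch, time, hidden_size)
        return self.head(h)        # (batch, time, 2) per-frame outputs

model = EmotionLSTM()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on one domain (speech here) with placeholder tensors.
speech_x = torch.randn(8, 100, 65)   # 8 clips, 100 frames, 65 features
speech_y = torch.randn(8, 100, 2)    # time-continuous arousal/valence
for _ in range(10):                  # toy training loop
    optimizer.zero_grad()
    loss = criterion(model(speech_x), speech_y)
    loss.backward()
    optimizer.step()

# Cross-domain test: predict emotion for the other domain (music).
music_x = torch.randn(8, 100, 65)
with torch.no_grad():
    music_pred = model(music_x)      # shape (8, 100, 2)

The cross-domain test at the end mirrors the speech-to-music evaluation highlighted in the abstract; a sketch of the feature-space adaptation step mentioned there follows the BibTeX record below.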
@inproceedings{coutinho2014transferspeech,
 title = {Transfer learning emotion manifestation across music and speech},
 author = {Coutinho, Eduardo and Deng, Jun and Schuller, Bj{\"o}rn},
 booktitle = {Proceedings of the International Joint Conference on Neural Networks},
 publisher = {IEEE},
 year = {2014},
 month = jul,
 pages = {3592--3598},
 doi = {10.1109/IJCNN.2014.6889814},
 url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6889814},
 keywords = {article, conference},
 abstract = {In this article, we focus on time-continuous predictions of emotion in music and speech, and the transfer of learning from one domain to the other. First, we compare the use of Recurrent Neural Networks (RNN) with standard hidden units (Simple Recurrent Network, SRN) and Long Short-Term Memory (LSTM) blocks for intra-domain acoustic emotion recognition. We show that LSTM networks outperform SRN, and we explain, on average, 74%/59% (music) and 42%/29% (speech) of the variance in Arousal/Valence. Next, we evaluate whether cross-domain predictions of emotion are a viable option for acoustic emotion recognition, and we test the use of Transfer Learning (TL) for feature space adaptation. On average, our models are able to explain 70%/43% (music) and 28%/11% (speech) of the variance in Arousal/Valence. Overall, results indicate good cross-domain generalization performance, particularly for the model trained on speech and tested on music without pre-encoding of the input features. To the best of our knowledge, this is the first demonstration of cross-modal time-continuous predictions of emotion in the acoustic domain.}
}
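
The abstract mentions Transfer Learning for feature space adaptation, and a condition "without pre-encoding of the input features", but does not spell out the mechanism here. One common realization, offered purely as an assumption rather than as the authors' method, is to pre-encode features with an autoencoder trained on the target domain and feed the resulting representation to the emotion model; sizes and data are again placeholders.

import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    # Toy autoencoder for feature-space adaptation (hypothetical sizes).
    def __init__(self, n_features=65, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Fit the autoencoder on target-domain (music) features by reconstruction.
ae = FeatureAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
mse = nn.MSELoss()
music_feats = torch.randn(1000, 65)   # placeholder target-domain features
for _ in range(100):                  # toy reconstruction training
    opt.zero_grad()
    loss = mse(ae(music_feats), music_feats)
    loss.backward()
    opt.step()

# Pre-encode source-domain (speech) features before training the emotion
# model on them; skipping this step is the "without pre-encoding" condition.
speech_feats = torch.randn(1000, 65)  # placeholder source-domain features
with torch.no_grad():
    adapted = ae(speech_feats)        # adapted features for transfer

Passing speech features through an autoencoder fit on music nudges them toward the music feature distribution before training, which is one way to read "feature space adaptation" in the abstract.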
