Transfer learning emotion manifestation across music and speech. Coutinho, E., Deng, J., & Schuller, B. In Proceedings of the International Joint Conference on Neural Networks, pages 3592–3598, July 2014. IEEE. doi: 10.1109/IJCNN.2014.6889814.

Abstract: In this article, we focus on time-continuous predictions of emotion in music and speech, and the transfer of learning from one domain to the other. First, we compare the use of Recurrent Neural Networks (RNN) with standard hidden units (Simple Recurrent Network, SRN) and Long Short-Term Memory (LSTM) blocks for intra-domain acoustic emotion recognition. We show that LSTM networks outperform SRNs, and that they explain, on average, 74%/59% (music) and 42%/29% (speech) of the variance in Arousal/Valence. Next, we evaluate whether cross-domain predictions of emotion are a viable option for acoustic emotion recognition, and we test the use of Transfer Learning (TL) for feature space adaptation. On average, our models are able to explain 70%/43% (music) and 28%/11% (speech) of the variance in Arousal/Valence. Overall, results indicate good cross-domain generalization performance, particularly for the model trained on speech and tested on music without pre-encoding of the input features. To the best of our knowledge, this is the first demonstration of cross-modal time-continuous predictions of emotion in the acoustic domain.
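The intra-domain setup the abstract describes (a recurrent network regressing time-continuous Arousal/Valence values from frame-level acoustic features) can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensionality, hidden size, and loss below are assumptions chosen only to make the idea concrete.

# Minimal sketch of time-continuous Arousal/Valence regression with an LSTM.
# NOT the paper's implementation: the feature count (65), hidden size (64),
# and MSE loss are illustrative assumptions.
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    def __init__(self, n_features=65, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # two outputs per frame: Arousal, Valence

    def forward(self, x):                 # x: (batch, time, n_features)
        h, _ = self.lstm(x)               # per-frame hidden states
        return self.head(h)               # (batch, time, 2): one prediction per frame

model = EmotionLSTM()
feats = torch.randn(8, 200, 65)           # 8 clips, 200 frames of acoustic features
labels = torch.randn(8, 200, 2)           # time-continuous Arousal/Valence targets
loss = nn.MSELoss()(model(feats), labels) # frame-wise regression loss
loss.backward()

Replacing the LSTM with a vanilla RNN (nn.RNN) in the same scaffold gives the SRN baseline the paper compares against.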
@inproceedings{coutinho2014transferspeech,
  title = {Transfer learning emotion manifestation across music and speech},
  author = {Coutinho, Eduardo and Deng, Jun and Schuller, Bj{\"o}rn},
  booktitle = {Proceedings of the International Joint Conference on Neural Networks},
  year = {2014},
  month = {7},
  pages = {3592--3598},
  publisher = {IEEE},
  doi = {10.1109/IJCNN.2014.6889814},
  url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6889814},
  keywords = {article, conference},
  abstract = {In this article, we focus on time-continuous predictions of emotion in music and speech, and the transfer of learning from one domain to the other. First, we compare the use of Recurrent Neural Networks (RNN) with standard hidden units (Simple Recurrent Network, SRN) and Long Short-Term Memory (LSTM) blocks for intra-domain acoustic emotion recognition. We show that LSTM networks outperform SRNs, and that they explain, on average, 74%/59% (music) and 42%/29% (speech) of the variance in Arousal/Valence. Next, we evaluate whether cross-domain predictions of emotion are a viable option for acoustic emotion recognition, and we test the use of Transfer Learning (TL) for feature space adaptation. On average, our models are able to explain 70%/43% (music) and 28%/11% (speech) of the variance in Arousal/Valence. Overall, results indicate good cross-domain generalization performance, particularly for the model trained on speech and tested on music without pre-encoding of the input features. To the best of our knowledge, this is the first demonstration of cross-modal time-continuous predictions of emotion in the acoustic domain.}
}
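The abstract only says that Transfer Learning is used for feature space adaptation and mentions a "pre-encoding of the input features"; it does not spell out the method. One common realisation of such pre-encoding, sketched below under that assumption (and not necessarily the paper's exact technique), is to train a shallow autoencoder on target-domain features and pass source-domain features through its encoder so both domains share one feature space.

# Hedged sketch of feature-space adaptation by pre-encoding input features.
# An autoencoder fitted on the target domain is ONE plausible realisation of
# the TL step the abstract mentions, not a confirmed description of it.
import torch
import torch.nn as nn

n_features, n_code = 65, 32                # illustrative sizes

encoder = nn.Sequential(nn.Linear(n_features, n_code), nn.Tanh())
decoder = nn.Linear(n_code, n_features)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

target_feats = torch.randn(1024, n_features)   # e.g. music-domain feature frames
for _ in range(10):                            # a few illustrative epochs
    opt.zero_grad()
    recon = decoder(encoder(target_feats))
    nn.MSELoss()(recon, target_feats).backward()
    opt.step()

# Source-domain (e.g. speech) features are then encoded the same way before
# training or testing the emotion model, aligning both domains in code space.
source_feats = torch.randn(512, n_features)
adapted = encoder(source_feats)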
{"_id":"C3ejrmSzuX4BnA3Qj","bibbaseid":"coutinho-deng-schuller-transferlearningemotionmanifestationacrossmusicandspeech-2014","downloads":0,"creationDate":"2018-03-24T12:43:42.820Z","title":"Transfer learning emotion manifestation across music and speech","author_short":["Coutinho, E.","Deng, J.","Schuller, B."],"year":2014,"bibtype":"inproceedings","biburl":"https://bibbase.org/service/mendeley/ffa9027c-806a-3827-93a1-02c42eb146a1","bibdata":{"title":"Transfer learning emotion manifestation across music and speech","type":"inproceedings","year":"2014","keywords":"article,conference","pages":"3592-3598","websites":"http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6889814","month":"7","publisher":"IEEE","id":"8c3325c9-29d1-3b26-a3a7-2827243dee67","created":"2024-08-09T12:19:58.958Z","file_attached":"true","profile_id":"ffa9027c-806a-3827-93a1-02c42eb146a1","group_id":"da2a8249-fdf4-3036-ba56-7358198a1600","last_modified":"2024-08-09T12:20:36.007Z","read":false,"starred":false,"authored":false,"confirmed":"true","hidden":false,"citation_key":"coutinho2014transferspeech","source_type":"inproceedings","private_publication":false,"abstract":"In this article, we focus on time-continuous predictions of emotion in music and speech, and the transfer of learning from one domain to the other. First, we compare the use of Recurrent Neural Networks (RNN) with standard hidden units (Simple Recurrent Network SRN) and Long-Short Term Memory (LSTM) blocks for intra-domain acoustic emotion recognition. We show that LSTM networks outperform SRN, and we explain, in average, 74%/59% (music) and 42%/29% (speech) of the variance in Arousal/Valence. Next, we evaluate whether cross-domain predictions of emotion are a viable option for acoustic emotion recognition, and we test the use of Transfer Learning (TL) for feature space adaptation. In average, our models are able to explain 70%/43% (music) and 28%/ll% (speech) of the variance in Arousal/Valence. Overall, results indicate a good cross-domain generalization performance, particularly for the model trained on speech and tested on music without pre-encoding of the input features. To our best knowledge, this is the first demonstration of cross-modal time-continuous predictions of emotion in the acoustic domain.","bibtype":"inproceedings","author":"Coutinho, Eduardo and Deng, Jun and Schuller, Bjorn","doi":"10.1109/IJCNN.2014.6889814","booktitle":"Proceedings of the International Joint Conference on Neural Networks","bibtex":"@inproceedings{\n title = {Transfer learning emotion manifestation across music and speech},\n type = {inproceedings},\n year = {2014},\n keywords = {article,conference},\n pages = {3592-3598},\n websites = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6889814},\n month = {7},\n publisher = {IEEE},\n id = {8c3325c9-29d1-3b26-a3a7-2827243dee67},\n created = {2024-08-09T12:19:58.958Z},\n file_attached = {true},\n profile_id = {ffa9027c-806a-3827-93a1-02c42eb146a1},\n group_id = {da2a8249-fdf4-3036-ba56-7358198a1600},\n last_modified = {2024-08-09T12:20:36.007Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {coutinho2014transferspeech},\n source_type = {inproceedings},\n private_publication = {false},\n abstract = {In this article, we focus on time-continuous predictions of emotion in music and speech, and the transfer of learning from one domain to the other. 
First, we compare the use of Recurrent Neural Networks (RNN) with standard hidden units (Simple Recurrent Network SRN) and Long-Short Term Memory (LSTM) blocks for intra-domain acoustic emotion recognition. We show that LSTM networks outperform SRN, and we explain, in average, 74%/59% (music) and 42%/29% (speech) of the variance in Arousal/Valence. Next, we evaluate whether cross-domain predictions of emotion are a viable option for acoustic emotion recognition, and we test the use of Transfer Learning (TL) for feature space adaptation. In average, our models are able to explain 70%/43% (music) and 28%/ll% (speech) of the variance in Arousal/Valence. Overall, results indicate a good cross-domain generalization performance, particularly for the model trained on speech and tested on music without pre-encoding of the input features. To our best knowledge, this is the first demonstration of cross-modal time-continuous predictions of emotion in the acoustic domain.},\n bibtype = {inproceedings},\n author = {Coutinho, Eduardo and Deng, Jun and Schuller, Bjorn},\n doi = {10.1109/IJCNN.2014.6889814},\n booktitle = {Proceedings of the International Joint Conference on Neural Networks}\n}","author_short":["Coutinho, E.","Deng, J.","Schuller, B."],"urls":{"Paper":"https://bibbase.org/service/mendeley/ffa9027c-806a-3827-93a1-02c42eb146a1/file/6c41905a-2f0a-a067-4e23-6688f55da617/2014___Coutinho_Deng_Schuller___Transfer_learning_emotion_manifestation_across_music_and_speech.pdf.pdf","Website":"http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6889814"},"biburl":"https://bibbase.org/service/mendeley/ffa9027c-806a-3827-93a1-02c42eb146a1","bibbaseid":"coutinho-deng-schuller-transferlearningemotionmanifestationacrossmusicandspeech-2014","role":"author","keyword":["article","conference"],"metadata":{"authorlinks":{"coutinho, e":"https://bibbase.org/service/mendeley/ffa9027c-806a-3827-93a1-02c42eb146a1"}},"downloads":0},"search_terms":["transfer","learning","emotion","manifestation","music","speech","coutinho","deng","schuller"],"keywords":["article","conference"],"authorIDs":["mo4CFXJ7ukAMT9nho"],"dataSources":["Tcd3cXtdQsiKHPZsW","YqW8pMoihb7JazZcx","ya2CyA73rpZseyrZ8","2252seNhipfTmjEBQ"]}