Shared acoustic codes underlie emotional communication in music and speech—evidence from deep transfer learning. Coutinho, E. & Schuller, B. PLoS ONE, 12(6):e0179289, Public Library of Science (PLoS), June 2017. doi: 10.1371/journal.pone.0179289.

Abstract: Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, such that the communication of specific emotions is achieved, at least to a certain extent, through shared acoustic patterns. From an affective-sciences point of view, determining the degree of overlap between the two domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine-learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities for enlarging the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (arousal and valence) in music and speech, and transfer learning between these domains. We establish a comparative framework comprising intra-domain experiments (models trained and tested on the same modality, either music or speech) and cross-domain experiments (models trained on one modality and tested on the other). In the cross-domain context, we evaluated two strategies: direct transfer between domains, and feature-representation transfer based on denoising autoencoders to reduce the gap between the feature-space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature-representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
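Illustration (not from the paper): the feature-representation-transfer strategy mentioned in the abstract can be sketched with a denoising autoencoder that learns a representation of acoustic features shared across domains. The following Python/PyTorch snippet is a minimal, hypothetical sketch; the layer sizes, noise level, optimiser settings, and placeholder features are assumptions for illustration and do not reproduce the authors' actual configuration.

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Maps noise-corrupted acoustic feature vectors back to their clean versions."""
    def __init__(self, n_features: int, n_hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Corrupt the input with Gaussian noise, then reconstruct the clean input.
        noisy = x + 0.1 * torch.randn_like(x)
        return self.decoder(self.encoder(noisy))

def train_dae(features: torch.Tensor, epochs: int = 50) -> DenoisingAutoencoder:
    # Fit the DAE on acoustic features pooled from music and speech so that the
    # hidden layer becomes a domain-bridging representation.
    dae = DenoisingAutoencoder(features.shape[1])
    optimizer = torch.optim.Adam(dae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(dae(features), features)  # denoising reconstruction loss
        loss.backward()
        optimizer.step()
    return dae

# Hypothetical usage: placeholder 40-dimensional acoustic features per frame.
music_feats = torch.randn(500, 40)
speech_feats = torch.randn(500, 40)
dae = train_dae(torch.cat([music_feats, speech_feats]))
with torch.no_grad():
    # Encode speech with the shared encoder before feeding it to an
    # arousal/valence regressor trained on music (the cross-domain setting).
    speech_shared = dae.encoder(speech_feats)

In this sketch, a regressor (for example an LSTM, as is common for time-continuous emotion prediction) trained on encoded music features could then be applied to encoded speech features; this is the cross-domain setting the abstract compares against intra-domain models.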
@article{coutinho2017sharedlearning,
title = {Shared acoustic codes underlie emotional communication in music and speech—evidence from deep transfer learning},
type = {article},
year = {2017},
keywords = {article,journal},
pages = {e0179289},
volume = {12},
websites = {http://dx.plos.org/10.1371/journal.pone.0179289,http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000404607900019&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=f3ec48d},
month = {6},
publisher = {Public Library of Science (PLoS)},
day = {28},
abstract = {Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, such that the communication of specific emotions is achieved, at least to a certain extent, through shared acoustic patterns. From an affective-sciences point of view, determining the degree of overlap between the two domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a machine-learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities for enlarging the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (arousal and valence) in music and speech, and transfer learning between these domains. We establish a comparative framework comprising intra-domain experiments (models trained and tested on the same modality, either music or speech) and cross-domain experiments (models trained on one modality and tested on the other). In the cross-domain context, we evaluated two strategies: direct transfer between domains, and feature-representation transfer based on denoising autoencoders to reduce the gap between the feature-space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature-representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.},
bibtype = {article},
author = {Coutinho, Eduardo and Schuller, Björn},
editor = {Zhang, Yudong},
doi = {10.1371/journal.pone.0179289},
journal = {PLoS ONE},
number = {6}
}