The multiple voices of musical emotions: source separation for improving music emotion recognition models and their interpretability. de Berardinis, J., Cangelosi, A., & Coutinho, E. In Cumming, J., Lee, J. H., McFee, B., Schedl, M., Devaney, J., McKay, C., Zangerle, E., & de Reuse, T., editors, Proceedings of the 21st International Society for Music Information Retrieval Conference, pages 310-317, 2020. International Society for Music Information Retrieval.
Despite the manifold developments in music emotion recognition and related areas, estimating the emotional impact of music still poses many challenges. These are often associated with the complexity of the acoustic codes for emotion and the lack of large amounts of data with robust gold standards. In this paper, we propose a new computational model (EmoMucs) that considers the role of different musical voices in the prediction of the emotions induced by music. We combine source separation algorithms for breaking up music signals into independent song elements (vocals, bass, drums, other) with end-to-end state-of-the-art machine learning techniques for feature extraction and emotion modelling (valence and arousal regression). Through a series of computational experiments on a benchmark dataset, using source-specialised models trained independently and different fusion strategies, we demonstrate that EmoMucs outperforms state-of-the-art approaches, with the added advantage of providing insights into the relative contribution of different musical elements to the emotions perceived by listeners.
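To make the described pipeline concrete, below is a minimal late-fusion sketch in PyTorch. It assumes the four stems (vocals, bass, drums, other) have already been produced by an off-the-shelf separator (e.g. Open-Unmix or Demucs) and converted to mel-spectrograms; the `SourceEncoder` architecture, embedding sizes, and the `LateFusionVA` head are illustrative assumptions, not the authors' exact models.

```python
import torch
import torch.nn as nn

# Stems produced by a source separation front-end (assumption: precomputed).
SOURCES = ["vocals", "bass", "drums", "other"]

class SourceEncoder(nn.Module):
    """Per-source feature extractor. The paper trains source-specialised
    end-to-end models; this small CNN is a hypothetical stand-in."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # collapse to a fixed spatial size
        )
        self.fc = nn.Linear(16 * 4 * 4, emb_dim)

    def forward(self, spec):  # spec: (batch, 1, n_mels, time)
        h = self.conv(spec).flatten(1)
        return self.fc(h)

class LateFusionVA(nn.Module):
    """Late fusion: concatenate per-source embeddings, then jointly
    regress valence and arousal with a linear head."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {s: SourceEncoder(emb_dim) for s in SOURCES}
        )
        self.head = nn.Linear(emb_dim * len(SOURCES), 2)  # (valence, arousal)

    def forward(self, specs):  # specs: dict source -> (batch, 1, n_mels, time)
        embs = [self.encoders[s](specs[s]) for s in SOURCES]
        return self.head(torch.cat(embs, dim=1))

# Usage with dummy mel-spectrogram batches:
specs = {s: torch.randn(8, 1, 128, 256) for s in SOURCES}
va = LateFusionVA()(specs)  # shape (8, 2): valence/arousal predictions
```

Keeping one encoder per source and fusing only at the end is what makes per-voice attribution possible: the contribution of each stem to the valence/arousal prediction can be probed independently, which aligns with the interpretability claim of the abstract.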
