The Munich LSTM-RNN approach to the MediaEval 2014 "Emotion in Music" Task. Coutinho, E., Weninger, F., Schuller, B., & Scherer, K. R. In CEUR Workshop Proceedings, volume 1263, 2014.
In this paper we describe TUM's approach to the MediaEval "Emotion in Music" task. The goal of this task is to automatically estimate the emotions expressed by music (in terms of Arousal and Valence) in a time-continuous fashion. Our system consists of Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) for dynamic Arousal and Valence regression. We used two different sets of acoustic and psychoacoustic features that have been previously proven effective for emotion prediction in music and speech. The best model yielded an average Pearson's correlation coefficient of 0.354 (Arousal) and 0.198 (Valence), and an average Root Mean Squared Error of 0.102 (Arousal) and 0.079 (Valence).
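As a rough illustration of the kind of model the abstract describes (not the authors' actual implementation), the sketch below defines a small sequence-to-sequence LSTM that maps per-frame acoustic features to per-frame Arousal/Valence predictions; the feature dimensionality, layer size, and training step are placeholder assumptions.

```python
# Minimal sketch of an LSTM regressor for time-continuous Arousal/Valence
# prediction from per-frame acoustic features. Layer sizes, feature dimension,
# and the single MSE training step are illustrative assumptions only.
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    def __init__(self, n_features=65, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 2)  # one Arousal and one Valence value per frame

    def forward(self, x):                      # x: (batch, frames, n_features)
        h, _ = self.lstm(x)
        return self.out(h)                     # (batch, frames, 2)

model = EmotionLSTM()
features = torch.randn(8, 120, 65)             # 8 clips, 120 frames, 65 features (dummy data)
targets = torch.randn(8, 120, 2)               # per-frame Arousal/Valence labels (dummy data)
loss = nn.MSELoss()(model(features), targets)  # regression loss over all frames
loss.backward()                                # one illustrative backward pass (optimizer omitted)
```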
