Population Codes Representing Musical Timbre for High-Level fMRI Categorization of Music Genres. Casey, M., Thompson, J., Kang, O., Raizada, R., & Wheatley, T. In Langs, G., Rish, I., Grosse-Wentrup, M., & Murphy, B., editors, Machine Learning and Interpretation in Neuroimaging, Lecture Notes in Computer Science, pages 34–41, Berlin, Heidelberg, 2012. Springer.
We present experimental evidence in support of distributed neural codes for timbre that are implicated in discrimination of musical styles. We used functional magnetic resonance imaging (fMRI) in humans and multivariate pattern analysis (MVPA) to identify activation patterns that encode the perception of rich music audio stimuli from five different musical styles. We show that musical styles can be automatically classified from population codes in bilateral superior temporal sulcus (STS). To investigate the possible link between the acoustic features of the auditory stimuli and neural population codes in STS, we conducted a representational similarity analysis and a multivariate regression-retrieval task. We found that the similarity structure of timbral features of our stimuli resembled the similarity structure of the STS more than any other type of acoustic feature. We also found that a regression model trained on timbral features outperformed models trained on other types of audio features. Our results show that human brain responses to complex, natural music can be differentiated by timbral audio features, emphasizing the importance of timbre in auditory perception.
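Annotation: the representational similarity analysis (RSA) described in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the authors' code; the variable names (sts_patterns, timbre_feats) and the random placeholder arrays are assumptions standing in for the paper's STS voxel response patterns and timbral audio features. It follows the standard RSA recipe: build a representational dissimilarity matrix (RDM) for each data source, then rank-correlate the two RDMs.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix: condensed vector of
    correlation distances between all pairs of stimulus patterns.
    `patterns` has shape (n_stimuli, n_features)."""
    return pdist(patterns, metric="correlation")

# Hypothetical inputs, one row per music clip (placeholder data):
# sts_patterns : voxel responses from an STS region of interest
# timbre_feats : cepstral (timbral) descriptors of the same clips
rng = np.random.default_rng(0)
sts_patterns = rng.standard_normal((25, 500))
timbre_feats = rng.standard_normal((25, 20))

# RSA: compare the two similarity structures. Spearman rank
# correlation is conventional because the two RDMs' distance
# values are not on a common scale.
rho, p = spearmanr(rdm(sts_patterns), rdm(timbre_feats))
print(f"RDM correlation (Spearman rho) = {rho:.3f}, p = {p:.3g}")

Under the paper's finding, an RDM built from timbral features would yield a higher rho against the STS RDM than RDMs built from other acoustic feature types.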
@inproceedings{casey_population_2012,
	address = {Berlin, Heidelberg},
	series = {Lecture {Notes} in {Computer} {Science}},
	title = {Population {Codes} {Representing} {Musical} {Timbre} for {High}-{Level} {fMRI} {Categorization} of {Music} {Genres}},
	isbn = {978-3-642-34713-9},
	doi = {10.1007/978-3-642-34713-9_5},
	abstract = {We present experimental evidence in support of distributed neural codes for timbre that are implicated in discrimination of musical styles. We used functional magnetic resonance imaging (fMRI) in humans and multivariate pattern analysis (MVPA) to identify activation patterns that encode the perception of rich music audio stimuli from five different musical styles. We show that musical styles can be automatically classified from population codes in bilateral superior temporal sulcus (STS). To investigate the possible link between the acoustic features of the auditory stimuli and neural population codes in STS, we conducted a representational similarity analysis and a multivariate regression-retrieval task. We found that the similarity structure of timbral features of our stimuli resembled the similarity structure of the STS more than any other type of acoustic feature. We also found that a regression model trained on timbral features outperformed models trained on other types of audio features. Our results show that human brain responses to complex, natural music can be differentiated by timbral audio features, emphasizing the importance of timbre in auditory perception.},
	language = {en},
	booktitle = {Machine {Learning} and {Interpretation} in {Neuroimaging}},
	publisher = {Springer},
	author = {Casey, Michael and Thompson, Jessica and Kang, Olivia and Raizada, Rajeev and Wheatley, Thalia},
	editor = {Langs, Georg and Rish, Irina and Grosse-Wentrup, Moritz and Murphy, Brian},
	year = {2012},
	keywords = {STS, cepstrum, multivariate analysis, music, timbre code},
	pages = {34--41},
}
