Development of an auditory emotion recognition function using psychoacoustic parameters based on the International Affective Digitized Sounds. Choi, Y., Lee, S., Jung, S., Choi, I., Park, Y., & Kim, C. Behavior Research Methods, 47(4):1076–1084, December 2015.
The purpose of this study was to develop an auditory emotion recognition function that could determine the emotion evoked by sounds encountered in daily life. For this purpose, sound stimuli were selected from the International Affective Digitized Sounds (IADS-2), a standardized database of sounds intended to evoke emotion, and four psychoacoustic parameters (loudness, sharpness, roughness, and fluctuation strength) were extracted from them. In addition, 140 college students rated the sounds on an emotion adjective scale measuring three basic emotions (happiness, sadness, and negativity). A discriminant analysis predicting the basic emotions from the psychoacoustic parameters produced a discriminant function with an overall discriminant accuracy of 88.9% on the training data. To validate the discriminant function, the same four psychoacoustic parameters were extracted from 46 sound stimuli collected from another database and substituted into the function, yielding an overall discriminant accuracy of 63.04%. Our findings suggest that daily-life sounds, beyond voice and music, can be used in a human–machine interface.
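
The pipeline the abstract describes (four psychoacoustic features per sound, a discriminant function mapping them to one of three emotions) can be sketched in Python with scikit-learn. This is a minimal illustration under stated assumptions, not the authors' code: the feature values and labels below are random placeholders standing in for the IADS-2 measurements and the adjective-scale ratings, and LinearDiscriminantAnalysis stands in for whatever discriminant procedure the paper used.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Feature columns: loudness, sharpness, roughness, fluctuation strength.
# Placeholder values; the paper extracts these from IADS-2 recordings.
X_train = rng.normal(size=(100, 4))
# Placeholder labels: 0 = happiness, 1 = sadness, 2 = negativity.
y_train = rng.integers(0, 3, size=100)

# Fit a linear discriminant function on the training sounds.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Validate on held-out sounds, mirroring the paper's test on 46 stimuli
# drawn from a second database.
X_test = rng.integers(0, 1, size=(46, 4)) + rng.normal(size=(46, 4))
y_test = rng.integers(0, 3, size=46)
print(f"Overall discriminant accuracy: {lda.score(X_test, y_test):.2%}")

With real features in place of the placeholders, lda.score reports the fraction of sounds assigned to the correct emotion, the same overall-accuracy figure the abstract quotes (88.9% on training data, 63.04% on the validation set).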
@article{choi_development_2015,
	title = {Development of an auditory emotion recognition function using psychoacoustic parameters based on the {International} {Affective} {Digitized} {Sounds}},
	volume = {47},
	issn = {1554-3528},
	url = {https://doi.org/10.3758/s13428-014-0525-4},
	doi = {10.3758/s13428-014-0525-4},
	abstract = {The purpose of this study was to develop an auditory emotion recognition function that could determine the emotion evoked by sounds encountered in daily life. For this purpose, sound stimuli were selected from the International Affective Digitized Sounds (IADS-2), a standardized database of sounds intended to evoke emotion, and four psychoacoustic parameters (loudness, sharpness, roughness, and fluctuation strength) were extracted from them. In addition, 140 college students rated the sounds on an emotion adjective scale measuring three basic emotions (happiness, sadness, and negativity). A discriminant analysis predicting the basic emotions from the psychoacoustic parameters produced a discriminant function with an overall discriminant accuracy of 88.9\% on the training data. To validate the discriminant function, the same four psychoacoustic parameters were extracted from 46 sound stimuli collected from another database and substituted into the function, yielding an overall discriminant accuracy of 63.04\%. Our findings suggest that daily-life sounds, beyond voice and music, can be used in a human–machine interface.},
	language = {en},
	number = {4},
	urldate = {2023-01-30},
	journal = {Behavior Research Methods},
	author = {Choi, Youngimm and Lee, Sungjun and Jung, SungSoo and Choi, In-Mook and Park, Yon-Kyu and Kim, Chobok},
	month = dec,
	year = {2015},
	keywords = {Auditory emotion recognition, Emotion recognition, Emotional adjectives, IADS-2, Psychoacoustic parameters},
	pages = {1076--1084},
}
