A data-driven approach to mid-level perceptual musical feature modeling. Aljanaki, A. & Soleymani, M. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, September 2018. arXiv: https://arxiv.org/abs/1806.04903

Abstract: Musical features and descriptors can be coarsely divided into three levels of complexity. The bottom level contains the basic building blocks of music, e.g., chords, beats, and timbre. The middle level contains concepts that emerge from combining the basic blocks: tonal and rhythmic stability, harmonic and rhythmic complexity, etc. High-level descriptors (genre, mood, expressive style) are usually modeled using the lower-level ones. Features belonging to the middle level can both improve automatic recognition of high-level descriptors and provide new music retrieval possibilities. Mid-level features are subjective and usually lack clear definitions. However, they are very important for human perception of music, and on some of them people can reach high agreement, even though defining them, and therefore designing a hand-crafted feature extractor for them, can be difficult. In this paper, we derive the mid-level descriptors from data. We collect and release a dataset (https://osf.io/5aupt/) of 5000 songs annotated by musicians with seven mid-level descriptors, namely melodiousness, tonal stability, rhythmic stability, modality, rhythmic complexity, dissonance, and articulation. We then compare several approaches to predicting these descriptors from spectrograms using deep learning. We also demonstrate the usefulness of these mid-level features using music emotion recognition as an application.
@inproceedings{aljanaki_data-driven_2018,
address = {Paris, France},
title = {A data-driven approach to mid-level perceptual musical feature modeling},
url = {https://arxiv.org/abs/1806.04903},
abstract = {Musical features and descriptors can be coarsely divided into three levels of complexity. The bottom level contains the basic building blocks of music, e.g., chords, beats, and timbre. The middle level contains concepts that emerge from combining the basic blocks: tonal and rhythmic stability, harmonic and rhythmic complexity, etc. High-level descriptors (genre, mood, expressive style) are usually modeled using the lower-level ones. Features belonging to the middle level can both improve automatic recognition of high-level descriptors and provide new music retrieval possibilities. Mid-level features are subjective and usually lack clear definitions. However, they are very important for human perception of music, and on some of them people can reach high agreement, even though defining them, and therefore designing a hand-crafted feature extractor for them, can be difficult. In this paper, we derive the mid-level descriptors from data. We collect and release a dataset (https://osf.io/5aupt/) of 5000 songs annotated by musicians with seven mid-level descriptors, namely melodiousness, tonal stability, rhythmic stability, modality, rhythmic complexity, dissonance, and articulation. We then compare several approaches to predicting these descriptors from spectrograms using deep learning. We also demonstrate the usefulness of these mid-level features using music emotion recognition as an application.},
booktitle = {Proceedings of the 19th {International} {Society} for {Music} {Information} {Retrieval} {Conference}},
author = {Aljanaki, Anna and Soleymani, Mohammad},
month = sep,
year = {2018},
keywords = {Virtual Humans},
}
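The abstract describes predicting the seven descriptors from spectrograms with deep learning, and then using the predictions for emotion recognition. As a rough, minimal sketch of that kind of setup (not the authors' model: the log-mel input shape, layer sizes, and plain MSE regression head below are illustrative assumptions), a small PyTorch regressor could look like this:

# Minimal sketch, NOT the architecture from the paper: a small CNN that
# regresses the seven mid-level descriptors from a (dummy) log-mel
# spectrogram. Input shape, channel counts, and loss are assumptions.
import torch
import torch.nn as nn

MID_LEVEL_DESCRIPTORS = [
    "melodiousness", "tonal_stability", "rhythmic_stability",
    "modality", "rhythmic_complexity", "dissonance", "articulation",
]

class MidLevelCNN(nn.Module):
    def __init__(self, n_outputs: int = len(MID_LEVEL_DESCRIPTORS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # collapse frequency/time axes
        )
        self.regressor = nn.Linear(32, n_outputs)

    def forward(self, x):             # x: (batch, 1, n_mels, n_frames)
        h = self.features(x).flatten(1)
        return self.regressor(h)      # one continuous score per descriptor

model = MidLevelCNN()
spec = torch.randn(4, 1, 128, 512)    # dummy batch of log-mel spectrograms
pred = model(spec)                    # shape: (4, 7)
loss = nn.MSELoss()(pred, torch.rand(4, 7))  # placeholder rating targets

In the paper's application, predicted descriptors of this kind are fed into a music emotion recognition model; here the targets are random placeholders rather than the released annotations.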
{"_id":"YXkncPSLH85wnzREP","bibbaseid":"aljanaki-soleymani-adatadrivenapproachtomidlevelperceptualmusicalfeaturemodeling-2018","author_short":["Aljanaki, A.","Soleymani, M."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","address":"Paris, France","title":"A data-driven approach to mid-level perceptual musical feature modeling","url":"https://arxiv.org/abs/1806.04903","abstract":"Musical features and descriptors could be coarsely divided into three levels of complexity. The bottom level contains the basic building blocks of music, e.g., chords, beats and timbre. The middle level contains concepts that emerge from combining the basic blocks: tonal and rhythmic stability, harmonic and rhythmic complexity, etc. High-level descriptors (genre, mood, expressive style) are usually modeled using the lower level ones. The features belonging to the middle level can both improve automatic recognition of high-level descriptors, and provide new music retrieval possibilities. Mid-level features are subjective and usually lack clear definitions. However, they are very important for human perception of music, and on some of them people can reach high agreement, even though defining them and therefore, designing a hand-crafted feature extractor for them can be difficult. In this paper, we derive the mid-level descriptors from data. We collect and release a dataset\\textbackslashtextbackslashfootnote\\https://osf.io/5aupt/\\ of 5000 songs annotated by musicians with seven mid-level descriptors, namely, melodiousness, tonal and rhythmic stability, modality, rhythmic complexity, dissonance and articulation. We then compare several approaches to predicting these descriptors from spectrograms using deep-learning. We also demonstrate the usefulness of these mid-level features using music emotion recognition as an application.","booktitle":"Proceedings of the 19th International Society for Music Information Retrieval Conference","publisher":"arXiv","author":[{"propositions":[],"lastnames":["Aljanaki"],"firstnames":["Anna"],"suffixes":[]},{"propositions":[],"lastnames":["Soleymani"],"firstnames":["Mohammad"],"suffixes":[]}],"month":"September","year":"2018","keywords":"Virtual Humans","bibtex":"@inproceedings{aljanaki_data-driven_2018,\n\taddress = {Paris, France},\n\ttitle = {A data-driven approach to mid-level perceptual musical feature modeling},\n\turl = {https://arxiv.org/abs/1806.04903},\n\tabstract = {Musical features and descriptors could be coarsely divided into three levels of complexity. The bottom level contains the basic building blocks of music, e.g., chords, beats and timbre. The middle level contains concepts that emerge from combining the basic blocks: tonal and rhythmic stability, harmonic and rhythmic complexity, etc. High-level descriptors (genre, mood, expressive style) are usually modeled using the lower level ones. The features belonging to the middle level can both improve automatic recognition of high-level descriptors, and provide new music retrieval possibilities. Mid-level features are subjective and usually lack clear definitions. However, they are very important for human perception of music, and on some of them people can reach high agreement, even though defining them and therefore, designing a hand-crafted feature extractor for them can be difficult. In this paper, we derive the mid-level descriptors from data. 
We collect and release a dataset{\\textbackslash}textbackslashfootnote\\{https://osf.io/5aupt/\\} of 5000 songs annotated by musicians with seven mid-level descriptors, namely, melodiousness, tonal and rhythmic stability, modality, rhythmic complexity, dissonance and articulation. We then compare several approaches to predicting these descriptors from spectrograms using deep-learning. We also demonstrate the usefulness of these mid-level features using music emotion recognition as an application.},\n\tbooktitle = {Proceedings of the 19th {International} {Society} for {Music} {Information} {Retrieval} {Conference}},\n\tpublisher = {arXiv},\n\tauthor = {Aljanaki, Anna and Soleymani, Mohammad},\n\tmonth = sep,\n\tyear = {2018},\n\tkeywords = {Virtual Humans},\n}\n\n","author_short":["Aljanaki, A.","Soleymani, M."],"key":"aljanaki_data-driven_2018","id":"aljanaki_data-driven_2018","bibbaseid":"aljanaki-soleymani-adatadrivenapproachtomidlevelperceptualmusicalfeaturemodeling-2018","role":"author","urls":{"Paper":"https://arxiv.org/abs/1806.04903"},"keyword":["Virtual Humans"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"https://api.zotero.org/users/6976806/collections/GUMH2QKL/items?key=ipCS99jY9KwOteQbpfAW7VKn&format=bibtex&limit=100","dataSources":["Z4B8L2qnYQgDdZhbe","jjKsXjebLR7xyJojc"],"keywords":["virtual humans"],"search_terms":["data","driven","approach","mid","level","perceptual","musical","feature","modeling","aljanaki","soleymani"],"title":"A data-driven approach to mid-level perceptual musical feature modeling","year":2018}