A Survey and Implementation of Performance Metrics for Self-Organized Maps. Forest, F., Lebbah, M., Azzag, H., & Lacaille, J. November 2020. arXiv:2011.05847 [cs].

Abstract: Self-Organizing Map algorithms have been used for almost 40 years across application domains such as biology, geology, healthcare, industry and the humanities as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can trust the resulting mapping, and they need to perform model selection to tune algorithm parameters (e.g. the map size). Quantitative evaluation of self-organizing maps (SOM) is a subset of clustering validation, which is a challenging problem in itself. Clustering model selection is typically achieved with clustering validity indices. While these also apply to self-organized clustering models, they ignore the topology of the map and only answer the question: do the SOM code vectors approximate the data distribution well? Evaluating SOM models brings the additional challenge of assessing their topology: does the mapping preserve neighborhood relationships between the map and the original data? The problem of assessing the performance of SOM models has been tackled quite thoroughly in the literature, giving birth to a family of quality indices incorporating neighborhood constraints, called topographic indices. Commonly used examples of such metrics are the topographic error, neighborhood preservation and the topographic product. However, open-source implementations are almost impossible to find. This is the issue we address in this work: after a survey of existing SOM performance metrics, we implemented them in Python on top of widely used numerical libraries and provide them as an open-source library, SOMperf. This paper introduces each metric available in our module along with usage examples.
@unpublished{forest2020survey,
abstract = {Self-Organizing Map algorithms have been used for almost 40 years across various application domains such as biology, geology, healthcare, industry and humanities as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can \textit{trust} the resulting mapping, and perform model selection to tune algorithm parameters (e.g. the map size). Quantitative evaluation of self-organizing maps (SOM) is a subset of clustering validation, which is a challenging problem as such. Clustering model selection is typically achieved by using clustering validity indices. While they also apply to self-organized clustering models, they ignore the topology of the map, only answering the question: do the SOM code vectors approximate well the data distribution? Evaluating SOM models brings in the additional challenge of assessing their topology: does the mapping preserve neighborhood relationships between the map and the original data? The problem of assessing the performance of SOM models has already been tackled quite thoroughly in literature, giving birth to a family of quality indices incorporating neighborhood constraints, called \textit{topographic} indices. Commonly used examples of such metrics are the topographic error, neighborhood preservation or the topographic product. However, open-source implementations are almost impossible to find. This is the issue we try to solve in this work: after a survey of existing SOM performance metrics, we implemented them in Python and widely used numerical libraries, and provide them as an open-source library, SOMperf. This paper introduces each metric available in our module along with usage examples.},
archivePrefix = {arXiv},
arxivId = {arXiv:2011.05847},
author = {Forest, Florent and Lebbah, Mustapha and Azzag, Hanane and Lacaille, J{\'{e}}r{\^{o}}me},
eprint = {arXiv:2011.05847},
note = {arXiv:2011.05847 [cs]},
title = {{A Survey and Implementation of Performance Metrics for Self-Organized Maps}},
year = {2020},
month = nov,
url_Link = {https://arxiv.org/abs/2011.05847},
url_Paper = {https://arxiv.org/pdf/2011.05847.pdf},
url_Code = {https://github.com/FlorentF9/SOMperf},
bibbase_note = {<img src="assets/img/papers/somperf.png">}
}
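
The topographic error named in the abstract is the most common of these topographic indices. As a rough illustration of what such a metric computes, here is a minimal NumPy sketch written from the standard definition (the fraction of samples whose best- and second-best-matching units are not adjacent on the map grid); the function name, argument layout and the 8-neighbor adjacency rule are assumptions for this example, not the SOMperf API.

import numpy as np

def topographic_error(code_vectors, grid_positions, data):
    """Fraction of samples whose two best-matching units are not adjacent on the map.

    code_vectors   : (n_units, dim) array of SOM prototypes (code vectors).
    grid_positions : (n_units, 2) array of unit coordinates on a rectangular map grid.
    data           : (n_samples, dim) array of input samples.
    """
    errors = 0
    for x in data:
        # Distance from the sample to every code vector.
        d = np.linalg.norm(code_vectors - x, axis=1)
        # Indices of the best and second-best matching units.
        bmu, second = np.argsort(d)[:2]
        # "Adjacent" here means at most one grid step in each direction
        # (8-connectivity); other conventions use 4-connectivity or hexagonal lattices.
        if np.abs(grid_positions[bmu] - grid_positions[second]).max() > 1:
            errors += 1
    return errors / len(data)

A value of 0 indicates that the map's topology is perfectly preserved for the given samples. The SOMperf library itself packages this and the other surveyed indices (neighborhood preservation, topographic product, etc.); see https://github.com/FlorentF9/SOMperf for its actual function signatures.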
{"_id":"LL4BdW2AqttqZ2dLT","bibbaseid":"forest-lebbah-azzag-lacaille-asurveyandimplementationofperformancemetricsforselforganizedmaps-2020","authorIDs":["ehEra63koKn92TZxF"],"author_short":["Forest, F.","Lebbah, M.","Azzag, H.","Lacaille, J."],"bibdata":{"bibtype":"unpublished","type":"unpublished","abstract":"Self-Organizing Map algorithms have been used for almost 40 years across various application domains such as biology, geology, healthcare, industry and humanities as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can <i>trust</i> the resulting mapping, and perform model selection to tune algorithm parameters (e.g. the map size). Quantitative evaluation of self-organizing maps (SOM) is a subset of clustering validation, which is a challenging problem as such. Clustering model selection is typically achieved by using clustering validity indices. While they also apply to self-organized clustering models, they ignore the topology of the map, only answering the question: do the SOM code vectors approximate well the data distribution? Evaluating SOM models brings in the additional challenge of assessing their topology: does the mapping preserve neighborhood relationships between the map and the original data? The problem of assessing the performance of SOM models has already been tackled quite thoroughly in literature, giving birth to a family of quality indices incorporating neighborhood constraints, called <i>topographic</i> indices. Commonly used examples of such metrics are the topographic error, neighborhood preservation or the topographic product. However, open-source implementations are almost impossible to find. This is the issue we try to solve in this work: after a survey of existing SOM performance metrics, we implemented them in Python and widely used numerical libraries, and provide them as an open-source library, SOMperf. This paper introduces each metric available in our module along with usage examples.","archiveprefix":"arXiv","arxivid":"arXiv:2011.05847","author":[{"propositions":[],"lastnames":["Forest"],"firstnames":["Florent"],"suffixes":[]},{"propositions":[],"lastnames":["Lebbah"],"firstnames":["Mustapha"],"suffixes":[]},{"propositions":[],"lastnames":["Azzag"],"firstnames":["Hanane"],"suffixes":[]},{"propositions":[],"lastnames":["Lacaille"],"firstnames":["Jérôme"],"suffixes":[]}],"eprint":"arXiv:2011.05847","note":"arXiv:2011.05847 [cs]","title":"A Survey and Implementation of Performance Metrics for Self-Organized Maps","year":"2020","month":"November","url_link":"https://arxiv.org/abs/2011.05847","url_paper":"https://arxiv.org/pdf/2011.05847.pdf","url_code":"https://github.com/FlorentF9/SOMperf","bibbase_note":"<img src=\"assets/img/papers/somperf.png\">","bibtex":"@unpublished{forest2020survey,\nabstract = {Self-Organizing Map algorithms have been used for almost 40 years across various application domains such as biology, geology, healthcare, industry and humanities as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can \\textit{trust} the resulting mapping, and perform model selection to tune algorithm parameters (e.g. the map size). Quantitative evaluation of self-organizing maps (SOM) is a subset of clustering validation, which is a challenging problem as such. Clustering model selection is typically achieved by using clustering validity indices. 
While they also apply to self-organized clustering models, they ignore the topology of the map, only answering the question: do the SOM code vectors approximate well the data distribution? Evaluating SOM models brings in the additional challenge of assessing their topology: does the mapping preserve neighborhood relationships between the map and the original data? The problem of assessing the performance of SOM models has already been tackled quite thoroughly in literature, giving birth to a family of quality indices incorporating neighborhood constraints, called \\textit{topographic} indices. Commonly used examples of such metrics are the topographic error, neighborhood preservation or the topographic product. However, open-source implementations are almost impossible to find. This is the issue we try to solve in this work: after a survey of existing SOM performance metrics, we implemented them in Python and widely used numerical libraries, and provide them as an open-source library, SOMperf. This paper introduces each metric available in our module along with usage examples.},\narchivePrefix = {arXiv},\narxivId = {arXiv:2011.05847},\nauthor = {Forest, Florent and Lebbah, Mustapha and Azzag, Hanane and Lacaille, J{\\'{e}}r{\\^{o}}me},\neprint = {arXiv:2011.05847},\nnote = {arXiv:2011.05847 [cs]},\ntitle = {{A Survey and Implementation of Performance Metrics for Self-Organized Maps}},\nyear = {2020},\nmonth = nov,\nurl_Link = {https://arxiv.org/abs/2011.05847},\nurl_Paper = {https://arxiv.org/pdf/2011.05847.pdf},\nurl_Code = {https://github.com/FlorentF9/SOMperf},\nbibbase_note = {<img src=\"assets/img/papers/somperf.png\">}\n}\n\n","author_short":["Forest, F.","Lebbah, M.","Azzag, H.","Lacaille, J."],"key":"forest2020survey","id":"forest2020survey","bibbaseid":"forest-lebbah-azzag-lacaille-asurveyandimplementationofperformancemetricsforselforganizedmaps-2020","role":"author","urls":{" link":"https://arxiv.org/abs/2011.05847"," paper":"https://arxiv.org/pdf/2011.05847.pdf"," code":"https://github.com/FlorentF9/SOMperf"},"metadata":{"authorlinks":{"lebbah, m":"http://localhost/"}},"downloads":4},"bibtype":"unpublished","biburl":"https://florentfo.rest/files/publications.bib","creationDate":"2021-02-04T22:25:23.578Z","downloads":4,"keywords":[],"search_terms":["survey","implementation","performance","metrics","self","organized","maps","forest","lebbah","azzag","lacaille"],"title":"A Survey and Implementation of Performance Metrics for Self-Organized Maps","year":2020,"dataSources":["2g4Ka29FvkMPri79S","DgnR6pzJ98ZEp97PW","2puawT8ZAQyYRypA3","pBkCjKbyeirr5jeAd","6rNfa4Kp6dL5sGmf5","xH8ySTsEPTLou9gyR"]}