Deep Architectures for Joint Clustering and Visualization with Self-Organizing Maps. Forest, F.; Lebbah, M.; Azzag, H.; and Lacaille, J. In Workshop on Learning Data Representations for Clustering (LDRC), PAKDD, 2019.
Recent research has demonstrated how deep neural networks are able to learn representations to improve data clustering. By considering representation learning and clustering as a joint task, models learn clustering-friendly spaces and achieve superior performance, compared with standard two-stage approaches where dimensionality reduction and clustering are performed separately. We extend this idea to topology-preserving clustering models, known as self-organizing maps (SOM). First, we present the Deep Embedded Self-Organizing Map (DESOM), a model composed of a fully-connected autoencoder and a custom SOM layer, where the SOM code vectors are learnt jointly with the autoencoder weights. Then, we show that this generic architecture can be extended to image and sequence data by using convolutional and recurrent architectures, and present variants of these models. First results demonstrate advantages of the DESOM architecture in terms of clustering performance, visualization and training time.
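The abstract describes an autoencoder whose latent space feeds a SOM layer, with the SOM code vectors trained jointly with the encoder/decoder weights. The following is a minimal, illustrative sketch of such a joint objective in PyTorch, based only on the abstract above: the layer sizes, the Gaussian neighborhood, the temperature T, and the loss weight gamma are assumptions for illustration, not the authors' exact architecture or hyperparameters.

```python
# Illustrative DESOM-style sketch (not the authors' implementation):
# a fully-connected autoencoder plus a set of SOM prototypes in latent space,
# trained jointly with reconstruction loss + a neighborhood-weighted SOM loss.
import torch
import torch.nn as nn

class DESOMSketch(nn.Module):
    def __init__(self, input_dim=784, latent_dim=10, map_size=(8, 8)):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))
        # SOM code vectors (one prototype per map unit), learned jointly.
        self.n_units = map_size[0] * map_size[1]
        self.prototypes = nn.Parameter(torch.randn(self.n_units, latent_dim))
        # Fixed 2-D grid coordinates used by the neighborhood function.
        gy, gx = torch.meshgrid(torch.arange(map_size[0]),
                                torch.arange(map_size[1]), indexing="ij")
        grid = torch.stack([gy, gx], dim=-1).reshape(-1, 2).float()
        self.register_buffer("grid", grid)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

    def som_loss(self, z, T=1.0):
        # Squared distances between latent codes and all prototypes.
        d = torch.cdist(z, self.prototypes) ** 2           # (batch, n_units)
        bmu = d.argmin(dim=1)                               # best-matching units
        # Gaussian neighborhood weights on the map grid around each BMU.
        grid_d = torch.cdist(self.grid[bmu], self.grid) ** 2
        h = torch.exp(-grid_d / (2.0 * T ** 2))
        return (h * d).sum(dim=1).mean()

def total_loss(model, x, gamma=0.001):
    # Joint objective: reconstruction + weighted SOM distortion (gamma assumed).
    z, x_rec = model(x)
    return nn.functional.mse_loss(x_rec, x) + gamma * model.som_loss(z)
```

In this sketch, backpropagating the joint loss updates the autoencoder weights and the SOM prototypes together, which is the "joint clustering and representation learning" idea the abstract refers to; in practice one would also anneal the neighborhood temperature T during training.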
@inproceedings{Forest2019a,
abstract = {Recent research has demonstrated how deep neural networks are able to learn representations to improve data clustering. By considering representation learning and clustering as a joint task, models learn clustering-friendly spaces and achieve superior performance, compared with standard two-stage approaches where dimensionality reduction and clustering are performed separately. We extend this idea to topology-preserving clustering models, known as self-organizing maps (SOM). First, we present the Deep Embedded Self-Organizing Map (DESOM), a model composed of a fully-connected autoencoder and a custom SOM layer, where the SOM code vectors are learnt jointly with the autoencoder weights. Then, we show that this generic architecture can be extended to image and sequence data by using convolutional and recurrent architectures, and present variants of these models. First results demonstrate advantages of the DESOM architecture in terms of clustering performance, visualization and training time.},
author = {Forest, Florent and Lebbah, Mustapha and Azzag, Hanane and Lacaille, J{\'{e}}r{\^{o}}me},
booktitle = {Workshop on Learning Data Representations for Clustering (LDRC), PAKDD},
doi = {10.1007/978-3-030-26142-9_10},
keywords = {autoencoder,clustering,deep learning,representation learning,self-organizing map},
title = {{Deep Architectures for Joint Clustering and Visualization with Self-Organizing Maps}},
year = {2019},
url_Link = {https://link.springer.com/chapter/10.1007/978-3-030-26142-9_10},
url_Paper = {LDRC-2019-DeepArchitecturesJointClusteringVisualization-full-paper.pdf}
}