Mourer, A.; Forest, F.; Lebbah, M.; Azzag, H.; and Lacaille, J. 2020. Selecting the Number of Clusters K with a Stability Trade-off: an Internal Validation Criterion. arXiv:2006.08530.
Model selection is a major challenge in non-parametric clustering. There is no universally admitted way to evaluate clustering results for the obvious reason that there is no ground truth against which results could be tested, as in supervised learning. The difficulty to find a universal evaluation criterion is a direct consequence of the fundamentally ill-defined objective of clustering. In this perspective, clustering stability has emerged as a natural and model-agnostic principle: an algorithm should find stable structures in the data. If data sets are repeatedly sampled from the same underlying distribution, an algorithm should find similar partitions. However, it turns out that stability alone is not a well-suited tool to determine the number of clusters. For instance, it is unable to detect if the number of clusters is too small. We propose a new principle for clustering validation: a good clustering should be stable, and within each cluster, there should exist no stable partition. This principle leads to a novel internal clustering validity criterion based on between-cluster and within-cluster stability, overcoming limitations of previous stability-based methods. We empirically show the superior ability of additive noise to discover structures, compared with sampling-based perturbation. We demonstrate the effectiveness of our method for selecting the number of clusters through a large number of experiments and compare it with existing evaluation methods.
@unpublished{Mourer2020,
archivePrefix = {arXiv},
author = {Mourer, Alex and Forest, Florent and Lebbah, Mustapha and Azzag, Hanane and Lacaille, J{\'{e}}r{\^{o}}me},
eprint = {2006.08530v1},
keywords = {validity index,clustering,model selection,stability analysis},
title = {{Selecting the Number of Clusters K with a Stability Trade-off: an Internal Validation Criterion}},
year = {2020},
url_Link = {https://arxiv.org/abs/2006.08530},
url_Paper = {https://arxiv.org/pdf/2006.08530.pdf}
}
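The core idea in the abstract — a clustering is trustworthy when repeatedly perturbing the data yields similar partitions — can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' full criterion: it implements only between-cluster stability under additive Gaussian noise (the perturbation scheme the paper favors over resampling), scoring agreement between partitions with the adjusted Rand index. The `noise_scale`, number of perturbations, and k-means settings are arbitrary choices for the example; the paper's within-cluster stability term is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score


def stability_score(X, k, n_perturb=10, noise_scale=0.1, seed=0):
    """Mean pairwise adjusted Rand index between k-means labelings of
    additive-noise perturbations of X. Higher means more stable.
    Illustrative sketch only; parameters are ad-hoc assumptions."""
    rng = np.random.default_rng(seed)
    labelings = []
    for _ in range(n_perturb):
        # Perturb with Gaussian noise proportional to the data spread.
        Xp = X + rng.normal(scale=noise_scale * X.std(), size=X.shape)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xp)
        labelings.append(labels)
    # Average agreement over all pairs of perturbed labelings.
    scores = [adjusted_rand_score(a, b)
              for i, a in enumerate(labelings)
              for b in labelings[i + 1:]]
    return float(np.mean(scores))


# Toy data: three well-separated Gaussian blobs in 2D.
rng = np.random.default_rng(42)
centers = [(0.0, 0.0), (5.0, 5.0), (0.0, 5.0)]
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in centers])

# Pick the K whose clustering is most stable under perturbation.
best_k = max(range(2, 7), key=lambda k: stability_score(X, k))
```

As the abstract notes, maximizing stability alone can under-estimate K (merging two true clusters can still be very stable), which is exactly the failure mode the paper's within-cluster stability term is designed to catch.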