A survey on unsupervised outlier detection in high-dimensional numerical data. Zimek, A., Schubert, E., & Kriegel, H.-P. Statistical Analysis and Data Mining: The ASA Data Science Journal, 5(5):363–387, 2012.
High-dimensional data in Euclidean space pose special challenges to data mining algorithms. These challenges are often indiscriminately subsumed under the term ‘curse of dimensionality’, more concrete aspects being the so-called ‘distance concentration effect’, the presence of irrelevant attributes concealing relevant information, or simply efficiency issues. In about just the last few years, the task of unsupervised outlier detection has found new specialized solutions for tackling high-dimensional data in Euclidean space. These approaches fall under mainly two categories, namely considering or not considering subspaces (subsets of attributes) for the definition of outliers. The former are specifically addressing the presence of irrelevant attributes, the latter do consider the presence of irrelevant attributes implicitly at best but are more concerned with general issues of efficiency and effectiveness. Nevertheless, both types of specialized outlier detection algorithms tackle challenges specific to high-dimensional data. In this survey article, we discuss some important aspects of the ‘curse of dimensionality’ in detail and survey specialized algorithms for outlier detection from both categories. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2012
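The abstract refers to the 'distance concentration effect': as dimensionality grows, pairwise Euclidean distances between points become increasingly similar, so the relative contrast between the nearest and farthest neighbor shrinks. The short sketch below, which is not taken from the paper, illustrates this well-known phenomenon on uniformly distributed random data; the sample size, dimensionalities, and uniform distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Relative contrast (d_max - d_min) / d_min of distances from a random
# query point to a uniform random sample, for increasing dimensionality.
# The contrast shrinking toward 0 is the 'distance concentration effect'.
for d in (2, 10, 100, 1000):
    points = rng.random((1000, d))          # 1000 uniform points in [0, 1]^d
    query = rng.random(d)                   # one random query point
    dists = np.linalg.norm(points - query, axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:5d}  relative contrast = {contrast:.3f}")
```

Running this prints a contrast that drops by orders of magnitude from d=2 to d=1000, which is the effect that motivates the specialized high-dimensional outlier detection methods surveyed in the paper.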
@article{zimek_survey_2012,
	title = {A survey on unsupervised outlier detection in high-dimensional numerical data},
	volume = {5},
	issn = {1932-1872},
	url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/sam.11161},
	doi = {10.1002/sam.11161},
	abstract = {High-dimensional data in Euclidean space pose special challenges to data mining algorithms. These challenges are often indiscriminately subsumed under the term ‘curse of dimensionality’, more concrete aspects being the so-called ‘distance concentration effect’, the presence of irrelevant attributes concealing relevant information, or simply efficiency issues. In about just the last few years, the task of unsupervised outlier detection has found new specialized solutions for tackling high-dimensional data in Euclidean space. These approaches fall under mainly two categories, namely considering or not considering subspaces (subsets of attributes) for the definition of outliers. The former are specifically addressing the presence of irrelevant attributes, the latter do consider the presence of irrelevant attributes implicitly at best but are more concerned with general issues of efficiency and effectiveness. Nevertheless, both types of specialized outlier detection algorithms tackle challenges specific to high-dimensional data. In this survey article, we discuss some important aspects of the ‘curse of dimensionality’ in detail and survey specialized algorithms for outlier detection from both categories. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2012},
	language = {en},
	number = {5},
	urldate = {2021-08-06},
	journal = {Statistical Analysis and Data Mining: The ASA Data Science Journal},
	author = {Zimek, Arthur and Schubert, Erich and Kriegel, Hans-Peter},
	year = {2012},
	keywords = {anomalies in high-dimensional data, approximate outlier detection, correlation outlier detection, curse of dimensionality, outlier detection in high-dimensional data, subspace outlier detection},
	pages = {363--387},
}
