Anomaly explanation: A review. Tchaghe, V. Y., Smits, G., & Pivert, O. Data & Knowledge Engineering, November 2021.
Anomaly detection has been studied intensively by the data mining community for many years. As a result, many methods to detect anomalies have emerged, and others are still under development. In recent years, however, anomaly detection, like many machine learning tasks, has hit a wall. This wall, erected by end users' lack of trust, has slowed the adoption of these algorithms in the real-world settings for which they were designed. The best empirical accuracy is no longer enough: algorithms must explain their outputs to users in order to earn their trust. Consequently, a new term has emerged: eXplainable Artificial Intelligence (XAI). This term, which covers all methods that provide explanations for the outputs of algorithms, has gained popularity, especially with the rise of deep learning. Much work has been devoted to anomaly detection in the literature, but far less to anomaly explanation. There is so much work on anomaly detection that several reviews of the topic exist. In contrast, we were unable to find a survey devoted specifically to anomaly explanation, even though there are many surveys on XAI in general or on XAI for neural networks, for example. With this paper, we aim to provide a comprehensive review of the anomaly explanation field. After a brief recap of some important anomaly detection algorithms, the anomaly explanation methods found in the literature are classified according to a taxonomy that we define. This taxonomy stems from an analysis of what really matters when explaining anomalies.
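
A purely illustrative sketch (not taken from the paper): one simple form an anomaly explanation can take is a ranking of the features that make a flagged point anomalous. The toy Python snippet below detects an outlier with a 3-sigma rule on per-feature z-scores and reports each feature's contribution; the data, threshold, and scoring rule are all assumptions chosen for illustration, not the survey's method.

import numpy as np

# Toy, assumed setup: 200 inliers with 3 standard-normal features,
# plus one query point that deviates strongly in feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
x = np.array([0.1, 6.0, -0.2])

# Per-feature z-scores of the query point w.r.t. the inlier sample.
mu, sigma = X.mean(axis=0), X.std(axis=0)
z = np.abs((x - mu) / sigma)

# Detection: simple 3-sigma rule. Explanation: rank the features by
# their deviation, i.e. an attribute-wise explanation of the anomaly.
if z.max() > 3.0:
    order = np.argsort(z)[::-1]
    print("Anomaly detected. Feature contributions (descending):")
    for i in order:
        print(f"  feature {i}: |z| = {z[i]:.2f}")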
@article{tchaghe_anomaly_2021,
	title = {Anomaly explanation: {A} review},
	issn = {0169-023X},
	shorttitle = {Anomaly explanation},
	url = {https://www.sciencedirect.com/science/article/pii/S0169023X21000720},
	doi = {10.1016/j.datak.2021.101946},
	abstract = {Anomaly detection has been studied intensively by the data mining community for many years. As a result, many methods to detect anomalies have emerged, and others are still under development. In recent years, however, anomaly detection, like many machine learning tasks, has hit a wall. This wall, erected by end users' lack of trust, has slowed the adoption of these algorithms in the real-world settings for which they were designed. The best empirical accuracy is no longer enough: algorithms must explain their outputs to users in order to earn their trust. Consequently, a new term has emerged: eXplainable Artificial Intelligence (XAI). This term, which covers all methods that provide explanations for the outputs of algorithms, has gained popularity, especially with the rise of deep learning. Much work has been devoted to anomaly detection in the literature, but far less to anomaly explanation. There is so much work on anomaly detection that several reviews of the topic exist. In contrast, we were unable to find a survey devoted specifically to anomaly explanation, even though there are many surveys on XAI in general or on XAI for neural networks, for example. With this paper, we aim to provide a comprehensive review of the anomaly explanation field. After a brief recap of some important anomaly detection algorithms, the anomaly explanation methods found in the literature are classified according to a taxonomy that we define. This taxonomy stems from an analysis of what really matters when explaining anomalies.},
	language = {en},
	urldate = {2021-11-26},
	journal = {Data \& Knowledge Engineering},
	author = {Tchaghe, Véronne Yepmo and Smits, Grégory and Pivert, Olivier},
	month = nov,
	year = {2021},
	keywords = {Anomaly detection, Anomaly explanation, Explainable Artificial Intelligence (XAI), Interpretability, Outlier interpretation},
	pages = {101946},
}