Sejr, J. H. & Schneider-Kamp, A. Explainable outlier detection: What, for Whom and Why? Machine Learning with Applications, October 2021. doi: 10.1016/j.mlwa.2021.100172

Abstract: Outlier algorithms are becoming increasingly complex. Thereby, they become much less interpretable to the data scientists applying the algorithms in real-life settings and to end-users using their predictions. We argue that outliers are context-dependent and, therefore, can only be detected via domain knowledge, algorithm insight, and interaction with end-users. As outlier detection is equivalent to unsupervised semantic binary classification, at the core of interpreting an outlier algorithm we find the semantics of the classes, i.e., the algorithm’s conceptual outlier definition. We investigate current interpretable and explainable outlier algorithms: what they are, for whom they are, and what their value proposition is. We then discuss how interpretation and explanation and user involvement have the potential to provide the missing link to bring modern complex outlier algorithms from computer science labs into real-life applications and the challenges they induce.
@article{sejr_explainable_2021,
title = {Explainable outlier detection: {What}, for {Whom} and {Why}?},
issn = {2666-8270},
shorttitle = {Explainable outlier detection},
url = {https://www.sciencedirect.com/science/article/pii/S2666827021000864},
doi = {10.1016/j.mlwa.2021.100172},
abstract = {Outlier algorithms are becoming increasingly complex. Thereby, they become much less interpretable to the data scientists applying the algorithms in real-life settings and to end-users using their predictions. We argue that outliers are context-dependent and, therefore, can only be detected via domain knowledge, algorithm insight, and interaction with end-users. As outlier detection is equivalent to unsupervised semantic binary classification, at the core of interpreting an outlier algorithm we find the semantics of the classes, i.e., the algorithm’s conceptual outlier definition. We investigate current interpretable and explainable outlier algorithms: what they are, for whom they are, and what their value proposition is. We then discuss how interpretation and explanation and user involvement have the potential to provide the missing link to bring modern complex outlier algorithms from computer science labs into real-life applications and the challenges they induce.},
language = {en},
urldate = {2021-10-04},
journal = {Machine Learning with Applications},
author = {Sejr, Jonas Herskind and Schneider-Kamp, Anna},
month = oct,
year = {2021},
keywords = {Explainable artificial intelligence, Unsupervised outlier detection},
pages = {100172},
}