Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. Samek, W., Wiegand, T., & Müller, K.-R. August 2017. arXiv: http://arxiv.org/abs/1708.08296v1

Abstract: With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching or even exceeding the human level on an increasing number of complex tasks. Impressive examples of this development can be found in domains such as image classification, sentiment analysis, speech understanding or strategic game playing. However, because of their nested non-linear structure, these highly successful machine learning and artificial intelligence models are usually applied in a black box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. Since this lack of transparency can be a major drawback, e.g., in medical applications, the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This paper summarizes recent developments in this field and makes a plea for more interpretability in artificial intelligence. Furthermore, it presents two approaches to explaining predictions of deep learning models: one method which computes the sensitivity of the prediction with respect to changes in the input, and one approach which meaningfully decomposes the decision in terms of the input variables. These methods are evaluated on three classification tasks.
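As a rough, hedged illustration of the first approach described in the abstract (sensitivity analysis), the sketch below uses the magnitude of the gradient of a class score with respect to the input as a pixel-wise heatmap. The toy PyTorch classifier, input size and variable names are assumptions for illustration only, not code from the paper; a decomposition-style sketch follows the BibTeX entry below.

```python
# Sensitivity analysis sketch: explain a prediction by the absolute gradient
# of the predicted class score with respect to the input variables.
# The tiny classifier and random input are placeholders (assumptions),
# not the networks or data used in the paper.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in "image"
scores = model(x)
c = scores.argmax(dim=1).item()                    # explain the predicted class

scores[0, c].backward()                            # d score_c / d x
sensitivity = x.grad.abs().squeeze(0).sum(dim=0)   # (32, 32) heatmap over pixels
print(sensitivity.shape)
```

Large entries mark input variables to which the prediction reacts most strongly; this explains local variation of the function rather than the decision value itself, which is what the decomposition approach aims at.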
@article{samek_explainable_2017,
title = {Explainable {Artificial} {Intelligence}: {Understanding}, {Visualizing} and {Interpreting} {Deep} {Learning} {Models}},
abstract = {With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching or even exceeding the human level on an increasing number of complex tasks. Impressive examples of this development can be found in domains such as image classification, sentiment analysis, speech understanding or strategic game playing. However, because of their nested non-linear structure, these highly successful machine learning and artificial intelligence models are usually applied in a black box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. Since this lack of transparency can be a major drawback, e.g., in medical applications, the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This paper summarizes recent developments in this field and makes a plea for more interpretability in artificial intelligence. Furthermore, it presents two approaches to explaining predictions of deep learning models, one method which computes the sensitivity of the prediction with respect to changes in the input and one approach which meaningfully decomposes the decision in terms of the input variables. These methods are evaluated on three classification tasks.},
author = {Samek, Wojciech and Wiegand, Thomas and Müller, Klaus-Robert},
month = aug,
year = {2017},
note = {arXiv: http://arxiv.org/abs/1708.08296v1},
}
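The second approach mentioned in the abstract decomposes the decision itself into relevance scores for the individual input variables (layer-wise relevance propagation, LRP). The sketch below applies one common LRP rule, the z+ rule, to the same kind of toy two-layer network used in the sensitivity sketch; the specific rule, helper name and toy setup are assumptions for illustration, not the paper's exact formulation.

```python
# Decomposition-style explanation sketch (LRP z+ rule): redistribute the
# class score backwards through the layers so that relevance ends up on the
# input variables. Toy setup mirrors the sensitivity sketch and is redefined
# here so the snippet runs on its own.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()
x = torch.rand(1, 3, 32, 32)                       # stand-in "image"
c = model(x).argmax(dim=1).item()                  # class to be explained

def lrp_zplus(a, W, R_out, eps=1e-9):
    """Redistribute relevance R_out from a linear layer's outputs to its
    inputs, letting only positively weighted contributions carry relevance."""
    Wp = W.clamp(min=0)              # (out, in) positive part of the weights
    z = a @ Wp.t() + eps             # (out,)    total positive contribution per output
    s = R_out / z                    # (out,)
    return a * (s @ Wp)              # (in,)     relevance per input of this layer

with torch.no_grad():
    lin1, relu, lin2 = model[1], model[2], model[3]
    a0 = x.flatten(1).squeeze(0)                   # (3072,) input activations
    a1 = relu(lin1(a0))                            # (64,)   hidden activations
    out = lin2(a1)                                 # (10,)   class scores

    R2 = torch.zeros_like(out)
    R2[c] = out[c]                                 # relevance starts as the class score
    R1 = lrp_zplus(a1, lin2.weight, R2)            # relevance of hidden units
    R0 = lrp_zplus(a0, lin1.weight, R1)            # relevance of the input variables
    relevance_map = R0.view(3, 32, 32).sum(dim=0)  # (32, 32) heatmap
    print(relevance_map.shape, float(R0.sum()))    # R0 approximately sums to out[c]
```

Because each layer redistributes the relevance it receives, the input relevances approximately sum to the explained class score; this conservation property is what distinguishes decomposition-based explanations from plain sensitivity maps.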
{"_id":"eTfEsyuKmn4voyE4B","bibbaseid":"samek-wiegand-mller-explainableartificialintelligenceunderstandingvisualizingandinterpretingdeeplearningmodels-2017","author_short":["Samek, W.","Wiegand, T.","Müller, K."],"bibdata":{"bibtype":"article","type":"article","title":"Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models","abstract":"With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching or even exceeding the human level on an increasing number of complex tasks. Impressive examples of this development can be found in domains such as image classification, sentiment analysis, speech understanding or strategic game playing. However, because of their nested non-linear structure, these highly successful machine learning and artificial intelligence models are usually applied in a black box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. Since this lack of transparency can be a major drawback, e.g., in medical applications , the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This paper summarizes recent developments in this field and makes a plea for more interpretability in artificial intelligence. Furthermore, it presents two approaches to explaining predictions of deep learning models, one method which computes the sensitivity of the prediction with respect to changes in the input and one approach which meaningfully decomposes the decision in terms of the input variables. These methods are evaluated on three classification tasks.","author":[{"propositions":[],"lastnames":["Samek"],"firstnames":["Wojciech"],"suffixes":[]},{"propositions":[],"lastnames":["Wiegand"],"firstnames":["Thomas"],"suffixes":[]},{"propositions":[],"lastnames":["Müller"],"firstnames":["Klaus-Robert"],"suffixes":[]}],"month":"August","year":"2017","note":"arXiv: http://arxiv.org/abs/1708.08296v1","keywords":"⛔ No DOI found","bibtex":"@article{samek_explainable_2017,\n\ttitle = {Explainable {Artificial} {Intelligence}: {Understanding}, {Visualizing} and {Interpreting} {Deep} {Learning} {Models}},\n\tabstract = {With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching or even exceeding the human level on an increasing number of complex tasks. Impressive examples of this development can be found in domains such as image classification, sentiment analysis, speech understanding or strategic game playing. However, because of their nested non-linear structure, these highly successful machine learning and artificial intelligence models are usually applied in a black box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. Since this lack of transparency can be a major drawback, e.g., in medical applications , the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This paper summarizes recent developments in this field and makes a plea for more interpretability in artificial intelligence. 
Furthermore, it presents two approaches to explaining predictions of deep learning models, one method which computes the sensitivity of the prediction with respect to changes in the input and one approach which meaningfully decomposes the decision in terms of the input variables. These methods are evaluated on three classification tasks.},\n\tauthor = {Samek, Wojciech and Wiegand, Thomas and Müller, Klaus-Robert},\n\tmonth = aug,\n\tyear = {2017},\n\tnote = {arXiv: http://arxiv.org/abs/1708.08296v1},\n\tkeywords = {⛔ No DOI found},\n}\n\n\n\n","author_short":["Samek, W.","Wiegand, T.","Müller, K."],"key":"samek_explainable_2017","id":"samek_explainable_2017","bibbaseid":"samek-wiegand-mller-explainableartificialintelligenceunderstandingvisualizingandinterpretingdeeplearningmodels-2017","role":"author","urls":{},"keyword":["⛔ No DOI found"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"https://bibbase.org/zotero/fsimonetta","dataSources":["pzyFFGWvxG2bs63zP"],"keywords":["⛔ no doi found"],"search_terms":["explainable","artificial","intelligence","understanding","visualizing","interpreting","deep","learning","models","samek","wiegand","müller"],"title":"Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models","year":2017}