A historical perspective of explainable Artificial Intelligence. Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. WIREs Data Mining and Knowledge Discovery, 11(1):e1391, 2021.
@article{confalonieri_historical_2021,
	title = {A historical perspective of explainable {Artificial} {Intelligence}},
	volume = {11},
	issn = {1942-4795},
	url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/widm.1391},
	doi = {10.1002/widm.1391},
	abstract = {Explainability in Artificial Intelligence (AI) has been revived as a topic of active research by the need of conveying safety and trust to users in the “how” and “why” of automated decision-making in different applications such as autonomous driving, medical diagnosis, or banking and finance. While explainability in AI has recently received significant attention, the origins of this line of work go back several decades to when AI systems were mainly developed as (knowledge-based) expert systems. Since then, the definition, understanding, and implementation of explainability have been picked up in several lines of research work, namely, expert systems, machine learning, recommender systems, and in approaches to neural-symbolic learning and reasoning, mostly happening during different periods of AI history. In this article, we present a historical perspective of Explainable Artificial Intelligence. We discuss how explainability was mainly conceived in the past, how it is understood in the present and, how it might be understood in the future. We conclude the article by proposing criteria for explanations that we believe will play a crucial role in the development of human-understandable explainable systems. This article is categorized under: Fundamental Concepts of Data and Knowledge {\textgreater} Explainable AI Technologies {\textgreater} Artificial Intelligence},
	language = {en},
	number = {1},
	urldate = {2022-03-23},
	journal = {WIREs Data Mining and Knowledge Discovery},
	author = {Confalonieri, Roberto and Coba, Ludovik and Wagner, Benedikt and Besold, Tarek R.},
	year = {2021},
	keywords = {explainable AI, explainable recommender systems, interpretable machine learning, neural-symbolic reasoning},
	pages = {e1391},
}