A systematic review and taxonomy of explanations in decision support and recommender systems. Nunes, I. & Jannach, D. User Modeling and User-Adapted Interaction, 27(3-5):393-444, 2017.
With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.
@article{Nunes2017,
 title = {A systematic review and taxonomy of explanations in decision support and recommender systems},
 type = {article},
 year = {2017},
 keywords = {Artificial intelligence,Decision support system,Expert system,Explanation,Knowledge-based system,Machine learning,Recommender system,Systematic review,Trust},
 pages = {393-444},
 volume = {27},
 id = {e9027c1f-ce69-38f9-bbc3-a402fdf7e457},
 created = {2018-04-19T18:46:55.652Z},
 file_attached = {false},
 profile_id = {2ed0fe69-06a2-3e8b-9bc9-5bdb197f1120},
 group_id = {e795dbfa-5576-3499-9c01-6574f19bf7aa},
 last_modified = {2018-12-14T12:16:32.665Z},
 read = {true},
 starred = {false},
 authored = {false},
 confirmed = {true},
 hidden = {false},
 citation_key = {Nunes2017},
 notes = {In this study of 217 articles, which analyzes techniques, tools, evaluations, and foundations of explanations, a taxonomy is proposed that covers aspects derived from the study's conclusions as well as aspects drawn from the state of the art. Aspects studied:
 - Explanation content. Four categories with subcategories are defined:
 1. User preferences and input (decisive input values, analysis of important features, etc.)
 2. Decision inference process (domain knowledge and inference, traces, etc.)
 3. Background knowledge (features, information about other users, etc.) and complementary/contextual information (knowledge about similar alternatives, context data, etc.)
 4. Decision output: alternatives and their features (pros and cons, irrelevant features, etc.)
 - Explanation objectives. Objectives studied: transparency, effectiveness, trust, persuasion, satisfaction, scrutability, efficiency, education (lets the user learn something about the system - Buchanan et al. (1984)), and debugging (lets the user identify which defects the system has - Buchanan et al. (1984)). Three levels:
 1. Stakeholder goals: increasing the intention to reuse the system.
 2. User-perceived quality factors: system factors that can contribute to increasing the intention to reuse the system.
 3. Explanation purposes (effectiveness, efficiency, persuasion, transparency).
 - Explanation presentation (format):
 1. Natural-language based
 2. Visualization (graphs or trees)
 3. Visualization (other)
 4. Lists
 5. Arguments
 6. Logs
 7. Other (sounds, OWL, etc.)
 - Explanation presentation (perspective): negative or positive.
 - Decision inference methods:
 1. Knowledge-based: rules, logic, multi-criteria, constraints, CBR, other.
 2. Machine learning: feature-based, collaborative filtering, hybrid.
 3. Mathematical models
 4. Decisions made by humans
 5. Algorithm-independent
 - Evaluation method: user studies, other empirical evaluations, acceptance tests, performance evaluations, and case studies.
 - Application domain of the tool: multimedia content recommendation (movies, music, etc.), health, e-commerce, computing and robotics, energy, etc.
 - Number of evaluated alternatives that were compared.
 - Measures used to evaluate the tools or techniques: subjective perception, domain-specific metrics, learning scores, interest in each alternative, interaction with the interface, etc.
 - Number of participants in the user study.
 - Experimental design of the user study:
 1. Single treatment: no comparison of methods is made.
 2. Between-subjects: subjects are divided into groups and receive different treatments.
 3. Within-subjects: all subjects are evaluated with all treatments.
 - Conclusions obtained in the study: positive results (with respect to the different explanation alternatives, including no explanation), negative results, or neutral results.
 The main facets are also sketched as a small data structure after this record.},
 private_publication = {false},
 abstract = {With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.},
 bibtype = {article},
 author = {Nunes, Ingrid and Jannach, Dietmar},
 journal = {User Modeling and User-Adapted Interaction},
 number = {3-5}
}
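As a reading aid for the facets summarized in the notes above, here is a minimal sketch that models a few of them (objectives, presentation format, perspective, inference method) as Python data types. The enum members and field names are paraphrased from the notes, not taken verbatim from the paper, and the example annotation is hypothetical.

from dataclasses import dataclass
from enum import Enum, auto


class Objective(Enum):
    """Explanation purposes listed in the notes."""
    TRANSPARENCY = auto()
    EFFECTIVENESS = auto()
    TRUST = auto()
    PERSUASION = auto()
    SATISFACTION = auto()
    SCRUTABILITY = auto()
    EFFICIENCY = auto()
    EDUCATION = auto()
    DEBUGGING = auto()


class PresentationFormat(Enum):
    """Presentation formats listed in the notes."""
    NATURAL_LANGUAGE = auto()
    GRAPH_OR_TREE = auto()
    OTHER_VISUALIZATION = auto()
    LIST = auto()
    ARGUMENT = auto()
    LOG = auto()
    OTHER = auto()


@dataclass
class ExplanationFacility:
    """One explanation facility annotated along a subset of the taxonomy facets."""
    objectives: list                  # list of Objective members
    content: str                      # e.g. "decisive input values", "decision trace"
    presentation: PresentationFormat
    perspective: str                  # "positive" or "negative"
    inference_method: str             # e.g. "rule-based", "collaborative filtering"


# Hypothetical annotation of a movie-recommender explanation.
example = ExplanationFacility(
    objectives=[Objective.TRANSPARENCY, Objective.TRUST],
    content="features shared with items the user rated highly",
    presentation=PresentationFormat.NATURAL_LANGUAGE,
    perspective="positive",
    inference_method="collaborative filtering",
)
print(example)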
