Dialogue Explanations for Rule-Based AI Systems. In Explainable and Transparent AI and Multi-Agent Systems, pages 59–77, Cham, 2023. Springer Nature Switzerland. [Computational Agent Responsibility]
The need for AI systems to explain themselves is increasingly recognised as a priority, particularly in domains where incorrect decisions can result in harm and, in the worst cases, death. Explainable Artificial Intelligence (XAI) aims to produce human-understandable explanations for AI decisions. However, most XAI systems prioritise technical complexity and research-oriented goals over end-user needs, risking information overload. This research aims to bridge that gap by helping users comprehend a rule-based system's reasoning through dialogue, on the hypothesis that dialogue is an effective mechanism for constructing explanations. A dialogue framework for rule-based AI systems is presented, allowing the system to explain its decisions by engaging in "Why?" and "Why not?" question-and-answer exchanges. We establish formal properties of this framework and present a small user study, with encouraging results, comparing dialogue-based explanations with proof trees produced by the AI system.
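The "Why?" / "Why not?" exchange described in the abstract echoes the classic explanation facility of rule-based expert systems. The sketch below is a minimal, illustrative Python rendering of that idea only; the Rule representation and the forward_chain, why, and why_not helpers are assumptions for exposition, not the paper's formal dialogue framework.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    head: str        # conclusion the rule derives
    body: list       # conditions that must all hold for the rule to fire

def forward_chain(rules, facts):
    """Derive every conclusion reachable from the given facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.head not in derived and all(c in derived for c in r.body):
                derived.add(r.head)
                changed = True
    return derived

def why(rules, facts, conclusion):
    """Answer 'Why <conclusion>?' by citing a rule whose body is satisfied."""
    derived = forward_chain(rules, facts)
    for r in rules:
        if r.head == conclusion and all(c in derived for c in r.body):
            return f"Because {' and '.join(r.body)} hold, {conclusion} follows."
    return f"{conclusion} was not derived."

def why_not(rules, facts, conclusion):
    """Answer 'Why not <conclusion>?' by naming the unmet conditions."""
    derived = forward_chain(rules, facts)
    if conclusion in derived:
        return f"{conclusion} was in fact derived."
    for r in rules:
        if r.head == conclusion:
            missing = [c for c in r.body if c not in derived]
            return f"{conclusion} did not follow: {', '.join(missing)} could not be established."
    return f"No rule concludes {conclusion}."

# Hypothetical example: a toy loan-approval rule base.
rules = [Rule("approve_loan", ["good_credit", "stable_income"]),
         Rule("good_credit", ["score_above_700"])]
facts = {"score_above_700"}

print(why(rules, facts, "good_credit"))      # Why was good_credit concluded?
print(why_not(rules, facts, "approve_loan")) # Why was approve_loan not concluded?
```

Each answer points at a single rule and its satisfied or missing conditions, which is the kind of focused, turn-by-turn response the abstract contrasts with handing the user an entire proof tree at once.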
