Interpretable Prognostics with Concept Bottleneck Models. Forest, F., Rombach, K., & Fink, O. May 2024. arXiv:2405.17575 [cs, eess, stat]
Deep learning approaches have recently been extensively explored for the prognostics of industrial assets. However, they still suffer from a lack of interpretability, which hinders their adoption in safety-critical applications. To improve their trustworthiness, explainable AI (XAI) techniques have been applied in prognostics, primarily to quantify the importance of input variables for predicting the remaining useful life (RUL) using post-hoc attribution methods. In this work, we propose the application of Concept Bottleneck Models (CBMs), a family of inherently interpretable neural network architectures based on concept explanations, to the task of RUL prediction. Unlike attribution methods, which explain decisions in terms of low-level input features, concepts represent high-level information that is easily understandable by users. Moreover, once verified in actual applications, CBMs enable domain experts to intervene on the concept activations at test time. We propose using the different degradation modes of an asset as intermediate concepts. Our case studies on the New Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) aircraft engine dataset for RUL prediction demonstrate that the performance of CBMs can be on par with or superior to black-box models, while being more interpretable, even when the available labeled concepts are limited. Code available at https://github.com/EPFL-IMOS/concept-prognostics.
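To illustrate the idea described in the abstract (an input-to-concept encoder followed by a concept-to-RUL head, with optional test-time intervention on concept activations), here is a minimal PyTorch sketch. It is not the authors' implementation (see the linked repository); the class name `ConceptBottleneckRUL`, the MLP encoder, the binary degradation-mode concepts, and the unweighted joint loss are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConceptBottleneckRUL(nn.Module):
    """Sketch of a Concept Bottleneck Model for RUL prediction (assumed architecture).

    The encoder maps a sensor window to concept activations (here, degradation
    modes); the head predicts RUL from the concepts alone, so every prediction
    is mediated by human-interpretable quantities.
    """

    def __init__(self, n_features: int, n_concepts: int, hidden: int = 64):
        super().__init__()
        # Encoder: flattened multivariate sensor window -> concept logits.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_concepts),
        )
        # Regression head: concept activations -> scalar RUL.
        self.head = nn.Sequential(
            nn.Linear(n_concepts, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, concept_intervention=None):
        concepts = torch.sigmoid(self.encoder(x))
        # Test-time intervention: a domain expert can overwrite the predicted
        # concept activations with verified values before RUL is computed.
        if concept_intervention is not None:
            concepts = concept_intervention
        return self.head(concepts), concepts


# Joint training step: RUL regression loss plus supervision on labeled concepts.
model = ConceptBottleneckRUL(n_features=20, n_concepts=4)
x = torch.randn(8, 20)                                 # batch of sensor windows
rul_target = torch.rand(8, 1) * 100                    # RUL labels (cycles)
concept_target = torch.randint(0, 2, (8, 4)).float()   # degradation-mode labels

rul_pred, concept_pred = model(x)
loss = nn.functional.mse_loss(rul_pred, rul_target) \
     + nn.functional.binary_cross_entropy(concept_pred, concept_target)
loss.backward()
```

In practice the encoder would be a sequence model over the N-CMAPSS sensor windows and the two loss terms would be weighted; the sketch only shows how the bottleneck forces predictions through the concept layer and where an expert intervention would enter.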
