Interpretable Prognostics with Concept Bottleneck Models. Forest, F., Rombach, K., & Fink, O. May 2024. arXiv:2405.17575 [cs, eess, stat].

Abstract: Deep learning approaches have recently been extensively explored for the prognostics of industrial assets. However, they still suffer from a lack of interpretability, which hinders their adoption in safety-critical applications. To improve their trustworthiness, explainable AI (XAI) techniques have been applied in prognostics, primarily to quantify the importance of input variables for predicting the remaining useful life (RUL) using post-hoc attribution methods. In this work, we propose the application of Concept Bottleneck Models (CBMs), a family of inherently interpretable neural network architectures based on concept explanations, to the task of RUL prediction. Unlike attribution methods, which explain decisions in terms of low-level input features, concepts represent high-level information that is easily understandable by users. Moreover, once verified in actual applications, CBMs enable domain experts to intervene on the concept activations at test-time. We propose using the different degradation modes of an asset as intermediate concepts. Our case studies on the New Commercial Modular AeroPropulsion System Simulation (N-CMAPSS) aircraft engine dataset for RUL prediction demonstrate that the performance of CBMs can be on par or superior to black-box models, while being more interpretable, even when the available labeled concepts are limited. Code available at https://github.com/EPFL-IMOS/concept-prognostics.
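As a concrete illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of a Concept Bottleneck Model for RUL prediction: the network first predicts concept activations (here, degradation modes), and the RUL estimate is computed from those concepts alone, which is also what makes test-time intervention possible. Everything below is an assumption for illustration, not the authors' implementation: the class name ConceptBottleneckRUL, the layer sizes, the choice of three concepts, and the 0.5 loss weight are all hypothetical; see the linked repository for the actual code.

import torch
import torch.nn as nn

class ConceptBottleneckRUL(nn.Module):
    """Sketch of a CBM: sensor inputs -> concept activations -> RUL."""

    def __init__(self, n_features: int, n_concepts: int = 3):
        super().__init__()
        # Encoder maps a sensor snapshot to one logit per degradation mode.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_concepts),
        )
        # The RUL head sees only the concepts, so every prediction is
        # mediated by the human-interpretable bottleneck.
        self.head = nn.Sequential(
            nn.Linear(n_concepts, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x, intervention=None):
        # Activation of each degradation-mode concept, in (0, 1).
        concepts = torch.sigmoid(self.encoder(x))
        if intervention is not None:
            # Test-time intervention: an expert overwrites selected
            # concept activations with known ground-truth values.
            mask, values = intervention
            concepts = torch.where(mask, values, concepts)
        return self.head(concepts).squeeze(-1), concepts

# Illustrative joint training step on random stand-in data
# (8 samples, 18 sensor channels, 3 labeled degradation modes).
x = torch.randn(8, 18)
modes = torch.randint(0, 2, (8, 3)).float()  # concept labels
rul = torch.rand(8) * 100.0                  # RUL targets (cycles)

model = ConceptBottleneckRUL(n_features=18)
rul_hat, c_hat = model(x)
# RUL regression loss plus a concept supervision term;
# the 0.5 weight is a made-up hyperparameter.
loss = nn.functional.mse_loss(rul_hat, rul) \
    + 0.5 * nn.functional.binary_cross_entropy(c_hat, modes)
loss.backward()

# Test-time intervention: force the first degradation mode to "inactive".
mask = torch.zeros(8, 3, dtype=torch.bool)
mask[:, 0] = True
rul_adj, _ = model(x, intervention=(mask, torch.zeros(8, 3)))

In the CBM literature, the concept encoder and the task head can be trained independently, sequentially, or jointly; the single joint loss above is just one of those options. The intervention call at the end shows the mechanism the abstract refers to: an expert who knows the true degradation mode can overwrite the model's concept estimate before the RUL is recomputed.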
@misc{forest2024interpretable,
title = {Interpretable {Prognostics} with {Concept} {Bottleneck} {Models}},
doi = {10.48550/arXiv.2405.17575},
abstract = {Deep learning approaches have recently been extensively explored for the prognostics of industrial assets. However, they still suffer from a lack of interpretability, which hinders their adoption in safety-critical applications. To improve their trustworthiness, explainable AI (XAI) techniques have been applied in prognostics, primarily to quantify the importance of input variables for predicting the remaining useful life (RUL) using post-hoc attribution methods. In this work, we propose the application of Concept Bottleneck Models (CBMs), a family of inherently interpretable neural network architectures based on concept explanations, to the task of RUL prediction. Unlike attribution methods, which explain decisions in terms of low-level input features, concepts represent high-level information that is easily understandable by users. Moreover, once verified in actual applications, CBMs enable domain experts to intervene on the concept activations at test-time. We propose using the different degradation modes of an asset as intermediate concepts. Our case studies on the New Commercial Modular AeroPropulsion System Simulation (N-CMAPSS) aircraft engine dataset for RUL prediction demonstrate that the performance of CBMs can be on par or superior to black-box models, while being more interpretable, even when the available labeled concepts are limited. Code available at https://github.com/EPFL-IMOS/concept-prognostics.},
publisher = {arXiv},
author = {Forest, Florent and Rombach, Katharina and Fink, Olga},
month = may,
year = {2024},
note = {arXiv:2405.17575 [cs, eess, stat]},
url_Link = {http://arxiv.org/abs/2405.17575},
url_Paper = {http://arxiv.org/pdf/2405.17575.pdf},
	url_Code = {https://github.com/EPFL-IMOS/concept-prognostics}
}