Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models. Terziyan, V. & Vitko, O. Procedia Computer Science, 200:216–226, January, 2022.
@article{terziyan_explainable_2022,
	series = {3rd {International} {Conference} on {Industry} 4.0 and {Smart} {Manufacturing}},
	title = {Explainable {AI} for {Industry} 4.0: {Semantic} {Representation} of {Deep} {Learning} {Models}},
	volume = {200},
	issn = {1877-0509},
	shorttitle = {Explainable {AI} for {Industry} 4.0},
	url = {https://www.sciencedirect.com/science/article/pii/S1877050922002290},
	doi = {10.1016/j.procs.2022.01.220},
	abstract = {Artificial Intelligence is an important asset of Industry 4.0. Current advances in machine learning, and particularly in deep learning, enable qualitative change within industrial processes, applications, systems and products. However, an important challenge remains: the explainability of (and, therefore, trust in) the decisions made by deep learning models (a.k.a. black boxes), and their poor capacity for being integrated with each other. Explainable artificial intelligence is needed instead, without loss of effectiveness of the deep learning models. In this paper we present a transformation technique between black-box models and explainable (as well as interoperable) classifiers based on semantic rules, via automatic recreation of the training datasets and retraining of decision trees (explainable models) in between. Our transformation technique results in explainable rule-based classifiers with good performance and an efficient training process, due to embedded incremental ignorance discovery and adversarial sample (“corner case”) generation algorithms. We also show a use-case scenario for such explainable and interoperable classifiers: collaborative condition monitoring, diagnostics and predictive maintenance of distributed (and isolated) smart industrial assets while preserving data and knowledge privacy of the users.},
	language = {en},
	urldate = {2022-03-14},
	journal = {Procedia Computer Science},
	author = {Terziyan, Vagan and Vitko, Oleksandra},
	month = jan,
	year = {2022},
	keywords = {Explainable Artificial Intelligence, Industry 4.0, deep learning, predictive maintenance, semantic web},
	pages = {216--226},
}