Reasoning about Explanations for Non-validation in SHACL. Ahmetaj, S., David, R., Ortiz, M., Polleres, A., Shehu, B., & Šimkus, M. In Proceedings of the 18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021), November 2021.
The Shapes Constraint Language (SHACL) is a recently standardized language for describing and validating constraints over RDF graphs. The SHACL specification describes so-called validation reports, which are meant to explain to users the outcome of validating an RDF graph against a collection of constraints. Explaining the reasons why the input graph does not satisfy the constraints is particularly challenging; in fact, the current SHACL standard leaves open how such explanations can be provided to users. In this paper, inspired by work on logic-based abduction and database repairs, we study the problem of explaining non-validation of SHACL constraints. In particular, in our framework non-validation is explained using the notion of a repair, i.e., a collection of additions and deletions whose application to an input graph results in a repaired graph that does satisfy the given SHACL constraints. We define a collection of decision problems for reasoning about explanations, possibly restricting to explanations that are minimal with respect to cardinality or set inclusion. We provide a detailed characterization of the computational complexity of these reasoning tasks, including both combined and data complexity.
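The repair-based notion of explanation described in the abstract can be illustrated concretely. The following sketch is not taken from the paper; it assumes the Python libraries rdflib and pySHACL and uses a hypothetical shape requiring every ex:Person to have an ex:name. A single triple addition acts as a repair, after which the graph validates.

from rdflib import Graph
from pyshacl import validate

# Shapes graph (hypothetical example): every ex:Person needs at least one ex:name.
shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .

ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path ex:name ; sh:minCount 1 ] .
"""

# Data graph that violates the constraint: ex:alice has no ex:name.
data_ttl = """
@prefix ex: <http://example.org/> .

ex:alice a ex:Person .
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)     # False: the graph does not validate
print(report_text)  # pySHACL's validation report describes the violation

# A repair in the sense of the paper: a set of additions and deletions whose
# application yields a graph satisfying the constraints. Here one addition suffices.
data.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:name "Alice" .
""", format="turtle")

conforms, _, _ = validate(data, shacl_graph=shapes)
print(conforms)     # True: the repaired graph validates

The paper's reasoning tasks then ask, for example, whether a repair exists at all, or whether a given addition or deletion occurs in some or every (cardinality- or subset-minimal) repair.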
