Algorithmic explanations in machine learning: in search for explananda. Muntean, I. 2023. Presented at IACAP 2022 and APA 2023
This paper evaluates the explanatory power of a class of machine-learning algorithms (MLAs) when they are applied to Big-Data datasets. Acknowledging that, as powerful categorization and classification tools, MLAs discover patterns in data rather than properties of real systems or processes, the paper investigates whether MLAs explain anything at all without representing a target. The upshot is that MLAs explain second- or higher-order properties of Big Data. Based on an analogy between MLAs and ensembles of scientific models, the paper treats model explanation as an issue separate from the accuracy of model representation. Like some “non-representational” models (e.g. “minimal models” or “exploratory models”), some MLAs can explain features present in Big Data without representing reality or a real system. Overall, the paper argues that MLAs do not offer “how-actually” explanations of real-world targets but answer “how-possibly” questions about explananda such as scales of representation, categories of systems, or parameters of theories. Even if MLAs do not directly represent a target, they convey information about patterns in data, called here “quasi-target systems”. Although MLAs do not directly represent a target system (a failed connection E. Sullivan calls “link uncertainty”), they can be explanatory, in a weaker sense than typical explanations, because they provide information about an explanandum, in this case quasi-targets. Some candidate MLA explananda are considered here, based mainly on the structure of MLAs.
@unpublished{munteanAlgorithmicExplanationsMachine2023,
	title = {Algorithmic explanations in machine learning: in search for explananda},
	copyright = {All rights reserved},
	abstract = {This paper evaluates the explanatory power of a class of machine-learning algorithms (MLAs) when they are applied to Big-Data datasets. Acknowledging that, as powerful categorization and classification tools, MLAs discover patterns in data rather than properties of real systems or processes, the paper investigates whether MLAs explain anything at all without representing a target. The upshot is that MLAs explain second- or higher-order properties of Big Data. Based on an analogy between MLAs and ensembles of scientific models, the paper treats model explanation as an issue separate from the accuracy of model representation. Like some “non-representational” models (e.g. “minimal models” or “exploratory models”), some MLAs can explain features present in Big Data without representing reality or a real system. Overall, the paper argues that MLAs do not offer “how-actually” explanations of real-world targets but answer “how-possibly” questions about explananda such as scales of representation, categories of systems, or parameters of theories. Even if MLAs do not directly represent a target, they convey information about patterns in data, called here “quasi-target systems”.
Although MLAs do not directly represent a target system (a failed connection E. Sullivan calls “link uncertainty”), they can be explanatory, in a weaker sense than typical explanations, because they provide information about an explanandum, in this case quasi-targets. Some candidate MLA explananda are considered here, based mainly on the structure of MLAs.},
	author = {Muntean, Ioan},
	year = {2023},
	note = {Presented at IACAP 2022 and APA 2023},
	keywords = {1PhilSci, Scientific Change, Scientific Progress},
}
