Moral coherentism, structures, and learning in machine ethics. August 2017.
@unpublished{MoralCoherentismStructures2017a,
	title = {Moral coherentism, structures, and learning in machine ethics},
	copyright = {All rights reserved},
	abstract = {This paper appraises a model in machine ethics in which moral cognition plays a pivotal role. The overall task is to argue for a form of moral coherentism as moral justification (D. Brink, D. Dorsey, G. Sayre-McCord, M. Lynch): in this case, coherence is obtained with the structure of an artificial agent. When moral cognition is a process by which an agent acquires and improves moral knowledge (moral learning, development, and reasoning by analogy are parts of this type of cognition), moral cognition becomes part of moral justification. The inferential character of such justification is appraised and compared to existing approaches, such as those of P. Thagard, P. Churchland, and especially M. Guarini.
Relating cognition to justification in the case of moral statements is a challenging task. Although there are several pathways to pursue here, we propose a type of agent that combines several elements: moral datasets, patterns in moral datasets, and ultimately structures that result from a process of learning from patterns in datasets, called here ‘learned structures.’ Machine learning is the paradigmatic example of such structure building, and it can combine multiple competing structures. From the perspective of coherentism, this paper argues that such structures enjoy some “coherence-making” features.
The artificial agent develops, through an internal structure (be it a neural network, a population of neural networks, decision trees, etc.), the ability to read the behavior of n human agents from data. Here, data represents human behavior (which depends on several factors, such as culture, age, religious beliefs, etc.). Standard coherentism defines the moral justification of a belief as its coherence with a system of beliefs; in machine ethics we explore other alternatives for defining machine moral justification. The one endorsed here is that the agent decides the moral value (moral truth) based on its computational structure (e.g., a decision tree, a single neural network, a population of decision trees or networks, an SVM, Bayesian structures, etc.).
The literature on machine ethics mentions “moral cognition” or machine (moral) learning in several contexts (Thagard, Guarini, Muntean \& Howard). The philosophical framework used here is moral coherentism (Brink, Sayre-McCord, Dorsey, Lynch, etc.), in its constructivist interpretation as a process of creating more coherent computational structures from data (e.g., neural networks, decision trees, SVMs). The paper attempts to describe the landscape of moral patterns in data based on concepts such as the partitioning of patterns and moral spaces.},
	month = aug,
	year = {2017},
	keywords = {Artificial Morality, Coherentism, Machine learning, Moral cognition, Moral learning, Moral machine learning, Structures},
}
