@unpublished{noauthor_pattern_2017,
	title = {Pattern computationalism (pattern-alism) in machine ethics: a defense},
	abstract = {Linking our morality to machines (computers, robots, algorithms, etc.) is no easy task, either for philosophers or for computer scientists. The very idea that processes such as moral cognition, moral behavior, moral intuition, or moral choice could be instantiated in non-human agents that are sufficiently autonomous and reliable looks problematic to many philosophers, especially those fond of the gap between reasoning about the “is” (facts) and the “ought” (values, norms, moral codes, etc.).

Are artificial, autonomous moral agents possible even in principle? And if they are, in what sense are they fundamentally different from other artificial agents? Is computation the right tool for building moral agents with sufficient moral expertise and discernment? If it is, what kind of computationalism is most likely to be realized by artificial moral agents?

The central focus of this paper is representation within artificial systems that computationally implement moral processes. The paper draws inspiration from recent debates about computation and representation (G. Piccinini, M. Rescorla, O. Shagrir) and depicts a case in which the architectures used in artificial agents suggest a novel type of pattern-based computation.

This paper argues for a form of computationalism applied to moral artificial agents, called “moral pattern computationalism,” similar in some respects to computational structuralism (advocated by Putnam, Chalmers, Copeland, and Scheutz, among others). The argument discounts the role semantics plays in computation as implemented in artificial moral agents and stresses the role of patterns in moral behavioral data. It engages the existing literature on the computational nature of cognition, representation (or lack thereof) in computationalism, the importance of patterns, and, finally, the role of functional interpretations.},
	month = feb,
	year = {2017},
	keywords = {Machine Ethics, Moral cognition, Patterns, Structure},
}