Pattern computationalism (pattern-alism) in machine ethics: a defense. February 2017.

Abstract: Linking our morality to machines (computers, robots, algorithms, etc.) is not an easy task, either for philosophers or for computer scientists. The very idea that processes such as moral cognition, moral behavior, moral intuition, or moral choice could be instantiated in non-human agents that are sufficiently autonomous and reliable looks problematic to many philosophers, especially those committed to the gap between reasoning about the “is” (facts) and the “ought” (values, norms, moral codes, etc.). Are artificial, autonomous moral agents possible even in principle? And if they are, in what sense are they fundamentally different from other artificial agents? Is computation the right tool for building moral agents with enough moral expertise and discernment? If it is, what kind of computationalism is most likely to be realized by artificial moral agents? The central focus of this paper is representation within artificial systems that computationally implement moral processes. The paper draws inspiration from recent debates about computation and representation (G. Piccinini, Rescorla, O. Shagrir) and depicts a case in which architectures used in artificial agents suggest a novel, pattern-based type of computation. The paper argues for a form of computationalism applied to moral artificial agents, called “moral pattern computationalism,” similar in some respects to computational structuralism (advocated by Putnam, Chalmers, Copeland, and Scheutz, among others). The argument discounts the role semantics plays in computation implemented in artificial moral agents and stresses the role of patterns in moral behavioral data. It engages the existing literature on the computational nature of cognition, representation (or the lack thereof) in computationalism, the importance of patterns, and, finally, the role of functional interpretations.
@unpublished{PatternComputationalismPatternalism2017a,
title = {Pattern computationalism (pattern-alism) in machine ethics: a defense},
copyright = {All rights reserved},
abstract = {Linking our morality to machines (computers, robots, algorithms, etc.) is not an easy task, either for philosophers or for computer scientists. The very idea that processes such as moral cognition, moral behavior, moral intuition, or moral choice could be instantiated in non-human agents that are sufficiently autonomous and reliable looks problematic to many philosophers, especially those committed to the gap between reasoning about the “is” (facts) and the “ought” (values, norms, moral codes, etc.). Are artificial, autonomous moral agents possible even in principle? And if they are, in what sense are they fundamentally different from other artificial agents? Is computation the right tool for building moral agents with enough moral expertise and discernment? If it is, what kind of computationalism is most likely to be realized by artificial moral agents? The central focus of this paper is representation within artificial systems that computationally implement moral processes. The paper draws inspiration from recent debates about computation and representation (G. Piccinini, Rescorla, O. Shagrir) and depicts a case in which architectures used in artificial agents suggest a novel, pattern-based type of computation. The paper argues for a form of computationalism applied to moral artificial agents, called “moral pattern computationalism,” similar in some respects to computational structuralism (advocated by Putnam, Chalmers, Copeland, and Scheutz, among others). The argument discounts the role semantics plays in computation implemented in artificial moral agents and stresses the role of patterns in moral behavioral data. It engages the existing literature on the computational nature of cognition, representation (or the lack thereof) in computationalism, the importance of patterns, and, finally, the role of functional interpretations.},
note = {4. Applied ethics: machine ethics},
month = feb,
year = {2017},
keywords = {Machine Ethics, Moral cognition, Patterns, Structure},
}