Trust and artificial agency: a metacognitive proposal. 2022. This paper argues for a form of trust (called here ‘A-trust’) in artificial agents (e.g., AI, machine learning algorithms, robots, autonomous vehicles) with a certain degree of autonomy (called ‘artificial autonomous agents’, or AAA). Several requirements on A-trust are discussed: trust in the creators (humans, companies, institutions) of the AAA, trust in the science and technology used in their design, and the reliability of the training data. We argue that A-trust is better approached in epistemology as a multi-level concept in which the AAA displays certain cognitive and metacognitive dispositions. What dispositions does the AAA need to be trustworthy to humans? We adopt a form of naturalized virtue epistemology for AAA, emphasizing the metacognitive dispositions important to A-trust. We relate metacognitivism about trust to recent literature in philosophy, cognitive science, and AI: we discuss the advantages of a multi-level virtue epistemology (Sosa, Greco, Baehr, Carter) and some neural models of metacognition (Nelson, Fleming, Timmermans), and relate them to the debate about uncertainty in machine learning (MacKay, Gal, Ghahramani). Regarding the rationality of A-trust, we sketch a Bayesian model of confidence, error detection, and model uncertainty. The A-trust proposed here requires that the AAA can deal with uncertainty in the right way. We emphasize the advantages of a multi-level approach to A-trust and briefly discuss the ethical implications of metacognitive requirements on A-trust.
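The abstract's mention of a Bayesian model of confidence and model uncertainty can be illustrated with a minimal sketch. The following Python snippet is not from the paper; it assumes an MC-dropout-style approach in the spirit of Gal & Ghahramani, where repeated stochastic forward passes yield a confidence score and a decomposition of predictive uncertainty into aleatoric and epistemic parts. The function stochastic_forward and all numbers are hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, n_samples=100):
    """Stand-in for a network with dropout left on at test time:
    each call perturbs the logits, simulating weight uncertainty."""
    logits = np.array([2.0, 1.0, 0.2]) * x           # fake 3-class logits
    noise = rng.normal(scale=0.5, size=(n_samples, 3))
    return softmax(logits + noise)                    # (n_samples, 3)

probs = stochastic_forward(x=1.0)
mean_p = probs.mean(axis=0)

# Confidence: probability of the predicted class under the averaged forecast.
confidence = mean_p.max()

# Total uncertainty: predictive entropy of the averaged distribution.
total = -(mean_p * np.log(mean_p)).sum()

# Expected per-sample entropy (aleatoric); the gap to `total` is the mutual
# information, often read as *model* (epistemic) uncertainty -- the quantity
# a metacognitive agent would monitor before acting or deferring to a human.
aleatoric = -(probs * np.log(probs)).sum(axis=1).mean()
epistemic = total - aleatoric

print(f"confidence={confidence:.3f}  epistemic={epistemic:.3f}")

On this reading, a metacognitive requirement on A-trust corresponds to the agent tracking the epistemic term, not just the raw confidence score.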
@unpublished{TrustArtificialAgency2022,
title = {Trust and artificial agency: a metacognitive proposal},
copyright = {All rights reserved},
	abstract = {This paper argues for a form of trust (called here ‘A-trust’) in artificial agents (e.g., AI, machine learning algorithms, robots, autonomous vehicles) with a certain degree of autonomy (called ‘artificial autonomous agents’, or AAA). Several requirements on A-trust are discussed: trust in the creators (humans, companies, institutions) of the AAA, trust in the science and technology used in their design, and the reliability of the training data. We argue that A-trust is better approached in epistemology as a multi-level concept in which the AAA displays certain cognitive and metacognitive dispositions. What dispositions does the AAA need to be trustworthy to humans? We adopt a form of naturalized virtue epistemology for AAA, emphasizing the metacognitive dispositions important to A-trust. We relate metacognitivism about trust to recent literature in philosophy, cognitive science, and AI: we discuss the advantages of a multi-level virtue epistemology (Sosa, Greco, Baehr, Carter) and some neural models of metacognition (Nelson, Fleming, Timmermans), and relate them to the debate about uncertainty in machine learning (MacKay, Gal, Ghahramani). Regarding the rationality of A-trust, we sketch a Bayesian model of confidence, error detection, and model uncertainty. The A-trust proposed here requires that the AAA can deal with uncertainty in the right way. We emphasize the advantages of a multi-level approach to A-trust and briefly discuss the ethical implications of metacognitive requirements on A-trust.},
	note = {2. Philosophy of computation},
year = {2022},
	author = {Muntean, Ioan},
	keywords = {Muntean Ioan},
}
{"_id":"ukeGNd4tLBEPeonqb","bibbaseid":"anonymous-trustandartificialagencyametacognitiveproposal-2022","bibdata":{"bibtype":"unpublished","type":"unpublished","title":"Trust and artificial agency: a metacognitive proposal","copyright":"All rights reserved","abstract":"This paper argues for a form of trust (called here ‘A-trust’) in artificial agents (e.g. AI, machine learning algorithms, robots, autonomous vehicles, etc.) with a certain degree of autonomy (called ‘artificial autonomous agents’=AAA). Several requirements on A-trust are discussed: trust in the creators (humans, companies, institutions) of the AAA, trust in the science and the technology used in their design, or reliability in the training data. We argue that A-trust is better approached in epistemology as a multi-level concept in which the AAA displays certain cognitive and metacognitive dispositions. What dispositions does the AAA need to be trustworthy to humans? We adopt a form of naturalized virtue epistemology of AAA, emphasizing metacognitive dispositions of AAA important to A-trust. We relate meta cognitivism about trust to recent literature in philosophy, cognitive science, and AI: we discuss the advantages of a multi-level virtue epistemology (Sosa, Greco, Baehr, Carter), some neural models of metacognition (Nelson, Fleming, Timmermans) and relate them to the debate about uncertainty in Machine Learning (McKay, Gal, Ghahramani). Regarding the rationality of A-trust, we sketch a Bayesian model of confidence, error detection, and model uncertainty. The A-trust proposed here requires that AAA can deal in the right way with uncertainty. We emphasize the advantages of a multi-level approach to A-trust and discuss briefly the ethical implications of metacognitive requirements on A-trust.","language":"2. Philosophy of computation","year":"2022","keywords":"Muntean Ioan","bibtex":"@unpublished{TrustArtificialAgency2022,\n\ttitle = {Trust and artificial agency: a metacognitive proposal},\n\tcopyright = {All rights reserved},\n\tabstract = {This paper argues for a form of trust (called here ‘A-trust’) in artificial agents (e.g. AI, machine learning algorithms, robots, autonomous vehicles, etc.) with a certain degree of autonomy (called ‘artificial autonomous agents’=AAA). Several requirements on A-trust are discussed: trust in the creators (humans, companies, institutions) of the AAA, trust in the science and the technology used in their design, or reliability in the training data. We argue that A-trust is better approached in epistemology as a multi-level concept in which the AAA displays certain cognitive and metacognitive dispositions. What dispositions does the AAA need to be trustworthy to humans? We adopt a form of naturalized virtue epistemology of AAA, emphasizing metacognitive dispositions of AAA important to A-trust. We relate meta cognitivism about trust to recent literature in philosophy, cognitive science, and AI: we discuss the advantages of a multi-level virtue epistemology (Sosa, Greco, Baehr, Carter), some neural models of metacognition (Nelson, Fleming, Timmermans) and relate them to the debate about uncertainty in Machine Learning (McKay, Gal, Ghahramani). Regarding the rationality of A-trust, we sketch a Bayesian model of confidence, error detection, and model uncertainty. The A-trust proposed here requires that AAA can deal in the right way with uncertainty. 
We emphasize the advantages of a multi-level approach to A-trust and discuss briefly the ethical implications of metacognitive requirements on A-trust.},\n\tlanguage = {2. Philosophy of computation},\n\tyear = {2022},\n\tkeywords = {Muntean\tIoan},\n}\n\n","key":"TrustArtificialAgency2022","id":"TrustArtificialAgency2022","bibbaseid":"anonymous-trustandartificialagencyametacognitiveproposal-2022","role":"","urls":{},"keyword":["Muntean Ioan"],"metadata":{"authorlinks":{}}},"bibtype":"unpublished","biburl":"https://api.zotero.org/users/125019/collections/5U7LGU73/items?key=ewW1RaRuE0S1jNousA0Xuz9X&format=bibtex&limit=100","dataSources":["wjn8cK5LeFh95gDh5","GPpRKkvJnTEshZncz","3DoWMBkcaKTznZMsx"],"keywords":["muntean ioan"],"search_terms":["trust","artificial","agency","metacognitive","proposal"],"title":"Trust and artificial agency: a metacognitive proposal","year":2022}