‘Distributed virtues’ of social robots: moral functionalism, coherentism, and metacognition. June 2025. Independent paper.

What kinds of virtues, if any, can we attribute to artificial agents? In particular, do social robots possess functional virtues? If so, are these virtues sufficiently similar to those of humans? Furthermore, are there ‘distributed virtues’ of hybrid agents interacting in networked environments? How should the designers and maintainers of social robots restrict or constrain their moral decisions?

We address these questions by examining moral cognition and moral learning in human and artificial moral agents. For the present argument, moral cognition is a prerequisite for most moral virtues. It encompasses at least the competence to learn from and adapt to an agent’s complex environment, which comprises both factual and moral patterns.

Depending on the unit of analysis for cognition, we differentiate between individual moral competencies and collective ones.

In the case of social robots, we expand on the latter and develop a model of ‘functionally distributive’ virtues (FDV) as a specific type of collective virtue that emerges in hybrid environments (where agency is distributed among human agents, patients, and artificial agents). We argue that the FDVs of social robots are not reducible to those present in each individual social robot or to what it was initially programmed to do.

We assume that an FDV can be defined by several factors: (a) the level of interactive learning and cognition required in human-robot interaction (HRI) (Shneiderman 2022; Arora et al. 2025), (b) the demand for performance in a complex environment, (c) constraints and restrictions on the autonomy of social robots, and (d) a form of ‘distributed responsibility’ in hybrid environments.
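A minimal sketch can make these four factors concrete. The following Python fragment is purely illustrative: the class name FDVFactors, the 0-to-1 scores, and the example values are our own assumptions, not a formalism proposed by the paper.

# Hypothetical encoding of the four FDV-defining factors (a)-(d) as a plain
# data structure; all names and scores below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FDVFactors:
    interactive_learning: float        # (a) interactive learning/cognition required in HRI
    performance_demand: float          # (b) demand for performance in a complex environment
    autonomy_constraints: float        # (c) constraints/restrictions on the robot's autonomy
    distributed_responsibility: float  # (d) degree of 'distributed responsibility'

# Example: a hypothetical care robot in a mental-health setting, scored in [0, 1].
care_robot = FDVFactors(
    interactive_learning=0.8,
    performance_demand=0.7,
    autonomy_constraints=0.5,
    distributed_responsibility=0.6,
)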
To successfully describe a desired human-robot environment, the FDV model requires a solid ethical foundation. We choose moral functionalism, together with a form of moral coherentism, as the two metaethical grounds of the FDV model of social robots.

The argument starts from a specific interpretation of the Aristotelian concept of ergon as ‘function’ and emphasizes its role in defining virtues, aretai, through the ‘function argument’ (Korsgaard, Abolafia). We show why this analysis is suitable for both individual and collective virtues.

From this working definition, we expand the Aristotelian ‘function argument’ into ‘moral functionalism’ (as promoted by Jackson, Pettit, Danielson, et al.), which represents a stance in metaethics rather than in normative ethics. Moral functionalism accommodates most normative theories, but it is particularly well suited as a foundation for virtue ethics. The type of moral functionalism used here (inspired by Danielson) views the individual agent as a rational cognitive system capable of learning from and adapting to a network of hybrid agents and patients, whether human or artificial.

The next step is to establish a stronger connection between the epistemic and moral virtues of social robots through ‘moral coherentism’ (Brink, Dorsey, Lynch, Sinnott-Armstrong). We connect the FDV to the idea of distributed cognition (knowledge) and epistemic coherentism.

Lynch’s type of coherence for moral truth is applied to the idea of distributive virtues of moral robots. In this FDV model, the virtues of social robots are strongly correlated with their distributive cognition. We take this knowledge to be distributed across social robots, and we infer that they display a corresponding distributed epistemic virtue.

In line with the latter, we define ‘distributive virtues’ functionally: different functions are distributed among the agents in the hybrid environment, namely the human agent, the social robot, and the human moral ‘subject’ (or moral patient). Virtues are distributed among these actors in the network as functions that can be interposed and composed based on coherence.

Finally, what are the advantages of the FDV model over individual models of HRI? This is where the moral metacognitive component of the FDV model becomes central to the control and assessment of social robots. We argue here that constraints and restrictions on the autonomy of social robots can be couched in terms of the metacognitive abilities of agents. In the FDV model, each agent develops a limited set of functional virtues that complement and cohere with the functions of the other agents. To obtain an environment in which social robots can thrive and contribute, each agent (be it human or artificial) has to develop a ‘meta-skill’: knowing the limits of its individual moral reasoning and its place in the distributed moral reasoning. We show in what sense this individual metacognitive virtue performs a vital function in the complex environments represented by the FDV model (a toy sketch of this composition and deferral idea follows the abstract).

We assume that a functional model of distributed virtues can be instantiated in various existing environments where social robots are deployed, already achieving partial success in areas such as education, gaming, mental health, and social media. Social robots serve as embodied intelligent virtual agents (IVAs), resulting in improved behavioral and mental healthcare (Luxton & Hudlicka). We demonstrate how the FDV model emerges in the context of mental healthcare, where the virtues of social robots are distributed in a way that enhances the overall environment by reducing social anxiety and social distrust. The paper discusses examples of distributed virtues such as cooperation, trust, and care, and compares the FDV model with more mainstream models based on individual virtues.
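To make two of the abstract's closing ideas more tangible, virtues as functions that are distributed and composed under a coherence constraint, and the metacognitive ‘meta-skill’ of knowing when to defer, here is a minimal, purely illustrative Python sketch. The agent names, the toy coherence measure, and the confidence threshold are our own assumptions and merely stand in for the paper's informal notions.

# Hypothetical sketch of distributed virtue functions gated by (i) a toy
# coherence test and (ii) a metacognitive deferral rule; every name and
# threshold below is an illustrative assumption, not the paper's model.
from typing import Callable, NamedTuple

class Judgment(NamedTuple):
    value: float       # moral appraisal of an action, in [-1, 1]
    confidence: float  # the agent's metacognitive self-assessment, in [0, 1]

# A virtue realized as a function from a situation to a judgment.
VirtueFn = Callable[[str], Judgment]

def coherent(judgments: list[Judgment], tolerance: float = 0.5) -> bool:
    # Toy coherence test: judgments cohere if their spread stays within tolerance.
    values = [j.value for j in judgments]
    return max(values) - min(values) <= tolerance

def distributed_judgment(situation: str, agents: dict[str, VirtueFn],
                         defer_below: float = 0.4) -> float:
    # The metacognitive 'meta-skill': an agent that knows the limits of its own
    # moral reasoning withholds its vote and leaves the call to the network.
    judgments = [fn(situation) for fn in agents.values()]
    confident = [j for j in judgments if j.confidence >= defer_below]
    pool = confident if confident else judgments
    if not coherent(pool):
        raise ValueError("judgments do not cohere; escalate to human oversight")
    # Confidence-weighted composition of the remaining judgments.
    total = sum(j.confidence for j in pool)
    return sum(j.value * j.confidence for j in pool) / total

# Toy usage with three stand-in actors of the hybrid environment.
agents = {
    "human_agent": lambda s: Judgment(0.7, 0.9),
    "social_robot": lambda s: Judgment(0.6, 0.3),   # low confidence: defers
    "moral_patient": lambda s: Judgment(0.8, 0.8),
}
print(distributed_judgment("disclose a patient's distress to caregivers", agents))

On this toy reading, restricting a robot's autonomy amounts to lowering the weight of its vote, or removing it entirely, whenever its metacognitive confidence falls below the deferral threshold.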
@unpublished{noauthor_distributed_2025,
title = {‘Distributed virtues’ of social robots: moral functionalism, coherentism, and metacognition},
abstract = {What kinds of virtues, if any, can we attribute to artificial agents? In particular, do social robots possess functional virtues? If so, are these virtues sufficiently similar to those of humans? Furthermore, are there ‘distributed virtues’ of hybrid agents interacting in networked environments? How should the designers and maintainers of social robots restrict or constrain their moral decisions?
We address these questions by examining moral cognition and moral learning in human and artificial moral agents. For the present argument, moral cognition is a prerequisite for most moral virtues. It encompasses at least the competence to learn from and adapt to an agent’s complex environment, which comprises both factual and moral patterns.
Depending on the unit of analysis for cognition, we differentiate between individual moral competencies and collective ones.
In the case of social robots, we expand on the latter and develop a model of ‘functionally distributive’ virtues (FDV) as a specific type of collective virtue that emerges in hybrid environments (where agency is distributed among human agents, patients, and artificial agents). We argue that the FDVs of social robots are not reducible to those present in each individual social robot or to what it was initially programmed to do.
We assume that an FDV can be defined by several factors: (a) the level of interactive learning and cognition required in human-robot interaction (HRI) (Shneiderman 2022; Arora et al. 2025), (b) the demand for performance in a complex environment, (c) constraints and restrictions on the autonomy of social robots, and (d) a form of ‘distributed responsibility’ in hybrid environments.
To successfully describe a desired human-robot environment, the FDV model requires a solid ethical foundation. We choose moral functionalism, together with a form of moral coherentism, as the two metaethical grounds of the FDV model of social robots.
The argument starts from a specific interpretation of the Aristotelian concept of ergon as ‘function’ and emphasizes its role in defining virtues, aretai, through the ‘function argument’ (Korsgaard, Abolafia). We show why this analysis is suitable for both individual and collective virtues.
From this working definition, we expand the Aristotelian ‘function argument’ into ‘moral functionalism’ (as promoted by Jackson, Pettit, Danielson, et al.), which represents a stance in metaethics rather than in normative ethics. Moral functionalism accommodates most normative theories, but it is particularly well suited as a foundation for virtue ethics. The type of moral functionalism used here (inspired by Danielson) views the individual agent as a rational cognitive system capable of learning from and adapting to a network of hybrid agents and patients, whether human or artificial.
The next step is to establish a stronger connection between the epistemic and moral virtues of social robots through ‘moral coherentism’ (Brink, Dorsey, Lynch, Sinnott-Armstrong). We connect the FDV to the idea of distributed cognition (knowledge) and epistemic coherentism.
Lynch’s type of coherence for moral truth is applied to the idea of distributive virtues of moral robots. In this FDV model, the virtues of social robots are strongly correlated with their distributive cognition. We take this knowledge to be distributed across social robots, and we infer that they display a corresponding distributed epistemic virtue.
In line with the latter, we define ‘distributive virtues’ functionally: different functions are distributed among the agents in the hybrid environment, namely the human agent, the social robot, and the human moral ‘subject’ (or moral patient). Virtues are distributed among these actors in the network as functions that can be interposed and composed based on coherence.
Finally, what are the advantages of the FDV model over individual models of HRI? This is where the moral metacognitive component of the FDV model becomes central to the control and assessment of social robots. We argue here that constraints and restrictions on the autonomy of social robots can be couched in terms of the metacognitive abilities of agents. In the FDV model, each agent develops a limited set of functional virtues that complement and cohere with the functions of the other agents. To obtain an environment in which social robots can thrive and contribute, each agent (be it human or artificial) has to develop a ‘meta-skill’: knowing the limits of its individual moral reasoning and its place in the distributed moral reasoning. We show in what sense this individual metacognitive virtue performs a vital function in the complex environments represented by the FDV model.
We assume that a functional model of distributed virtues can be instantiated in various existing environments where social robots are deployed, already achieving partial success in areas such as education, gaming, mental health, and social media. Social robots serve as embodied intelligent virtual agents (IVAs), resulting in improved behavioral and mental healthcare (Luxton \& Hudlicka). We demonstrate how the FDV model emerges in the context of mental healthcare, where the virtues of social robots are distributed in a way that enhances the overall environment by reducing social anxiety and social distrust. The paper discusses examples of distributed virtues such as cooperation, trust, and care, and compares the FDV model with more mainstream models based on individual virtues.},
keywords = {applied ethics, machine ethics},
month = jun,
year = {2025},
note = {Independent paper.},
}