Rationality and uncertainty in machine learning. A Bayesian approach. 2018.
What do we need to gain trust in the actions and reasons of artificial agents? We propose here several capacities and formulate some requirements that artificial agents should meet in order to be trusted qua rational agents. We link the rationality of artificial agents to three capacities: the first is the capacity to reason under uncertainty, especially regarding data uncertainty. The second is fallibilism, defined roughly as the capacity to know the limits of one’s knowledge of the world. These two capacities are related to one form or another of skepticism about perception and knowledge, with deep philosophical roots that go back to Plato and to the suspension of judgment (epochê). The third is the capacity to learn constructively, and it is less related to skepticism: the agent can build new knowledge by recombining, eliminating, or ignoring previous pieces of knowledge. The focus is on Machine Learning techniques for building artificial agents. We use a form of probabilistic decision theory (also known as Bayesian statistics) to show that some Machine Learning (ML) algorithms, more specifically the artificial neural networks used in Deep Learning, may display a specific form of rationality; a working definition of ‘artificial rationality’ for artificial agents is provided. We delineate it from a concept of human rationality inspired by scientific reasoning, based on three requirements: reasoning under data uncertainty, fallibilism (being able to represent its own limits), and active/corrective learning (when existing submodels are combined to create new knowledge or existing knowledge is divided into subdomains). The techniques that can instantiate these components of rationality are based on the idea of deleting, comparing, and combining models. We offer an analysis of the dropout method and a short speculative discussion of evolutionary algorithms. It is argued that (i) comparing and (ii) combining populations of representations are forms of rationality. The aim is to shed light on the philosophy of machine learning and its relation to Bayesian decision theory. The epistemology of the different techniques is used to illustrate why we do not trust some ML results and how this situation can be partially corrected.
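As a rough illustration of the "deleting, comparing, and combining models" reading of dropout sketched in the abstract (in the spirit of Monte Carlo dropout, not code taken from the paper), the toy sketch below runs a small network many times with random hidden units deleted and reads the spread of the resulting predictions as a crude self-reported uncertainty. The network, weights, and data are hypothetical placeholders.

```python
# Minimal, illustrative sketch (not from the paper): dropout seen as
# "deleting and combining" submodels to express the model's own uncertainty.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with untrained, random weights (illustration only).
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: each hidden unit is 'deleted' with prob p_drop."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p_drop   # delete a random subnetwork
    h = h * mask / (1.0 - p_drop)         # rescale the surviving units
    return h @ W2

x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(200)])  # combine many submodels

mean = samples.mean(axis=0)   # ensemble prediction
std = samples.std(axis=0)     # spread across submodels ~ uncertainty
print(f"prediction {mean.item():+.3f} +/- {std.item():.3f}")
```

The only point of the sketch is that averaging over a population of randomly thinned subnetworks yields both a combined prediction and a margin of error reported by the model itself, which is one way to cash out fallibilism (representing one's own limits) in ML practice.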
@unpublished{RationalityUncertaintyMachine2018a,
	title = {Rationality and uncertainty in machine learning. {A} {Bayesian} approach},
	copyright = {All rights reserved},
	abstract = {What do we need to gain trust in the actions and reasons of artificial agents? We propose here several capacities and formulate some requirements that artificial agents should meet in order to be trusted qua rational agents. We link the rationality of artificial agents to three capacities: the first is the capacity to reason under uncertainty, especially regarding data uncertainty. The second is fallibilism, defined roughly as the capacity to know the limits of one’s knowledge of the world. These two capacities are related to one form or another of skepticism about perception and knowledge, with deep philosophical roots that go back to Plato and to the suspension of judgment (epochê). The third is the capacity to learn constructively, and it is less related to skepticism: the agent can build new knowledge by recombining, eliminating, or ignoring previous pieces of knowledge. The focus is on Machine Learning techniques for building artificial agents. We use a form of probabilistic decision theory (also known as Bayesian statistics) to show that some Machine Learning (ML) algorithms, more specifically the artificial neural networks used in Deep Learning, may display a specific form of rationality; a working definition of ‘artificial rationality’ for artificial agents is provided. We delineate it from a concept of human rationality inspired by scientific reasoning, based on three requirements: reasoning under data uncertainty, fallibilism (being able to represent its own limits), and active/corrective learning (when existing submodels are combined to create new knowledge or existing knowledge is divided into subdomains). The techniques that can instantiate these components of rationality are based on the idea of deleting, comparing, and combining models. We offer an analysis of the dropout method and a short speculative discussion of evolutionary algorithms. It is argued that (i) comparing and (ii) combining populations of representations are forms of rationality. The aim is to shed light on the philosophy of machine learning and its relation to Bayesian decision theory. The epistemology of the different techniques is used to illustrate why we do not trust some ML results and how this situation can be partially corrected.},
	note = {Philosophy of computation},
	year = {2018},
	keywords = {Bayesianism, Machine Learning},
}
