Intuitions versus understanding: the natural and the artificial stories. March 2025.
Can machines have human-like features: understanding, intuition, empathy, Theory-of-Mind, and human-level general intelligence and, ultimately, consciousness? Do they ‘learn’/‘think’/‘feel’ like humans do? Do they have understanding or intuitions? These questions have been asked by scientists, psychologists, and the public at large. This paper offers an answer to the possibility of “AI intuitions” based on an analogy between the ‘natural’ and the ‘artificial.’ We note that almost the same questions have been incessantly asked about intuitions in animals, young children, or people affected by various conditions.
The answer offered here is ‘constitutive’: to explain some human-like features (including intuition and understanding, except probably consciousness) means to quantify over the features of the ‘architecture’ or the ‘mechanism’ underlying these systems. The common mechanism in this case is a ‘neural network’ that processes information. AI intuitions are better explained on the grounds of this analogy between artificial and natural neural networks. Having an intuition is determined by the presence or absence of certain structures or mechanisms; ditto for the faculty of understanding. The claim is that intuition and understanding play complementary roles in the functioning of a neural network, and this complementarity can be extended by analogy from natural to artificial networks.
We take ‘intuition’ as a type of “competence without comprehension” (Dennett, Krakauer, Gopnik). This implies that the results of an intuitive process do not follow from a chain of thought, so they are not the results of entailment or reasoning. Hagendorff, Fabi, and Kosinski associate “AI intuition” with Kahneman’s ‘System 1’; a reasoning process is then associated with ‘System 2’.
We follow a similar line of thought here based on the presence of metacognitive faculties. We assume that overwhelmingly intuitive systems do not possess a robust understanding, and vice versa. Intuitive competence is not monitored by any metacognitive mechanism (Meyniel et al.; Fleming and Daw). The presence of a metacognitive system could make the faculty of ‘understanding’ possible. In this sense, this paper takes intuition and understanding as complementary processes determined by the existence of specific metacognitive structures in the neural network. Finally, in a more formal part, the paper shows that one indicator of metacognitive skills is a certain degree of complexity, based on neural architecture search (NAS) and progressive neural architecture search (PNAS), which can play the role of balancing the intuitive-comprehensive decision processes in artificial networks. A future project is to strengthen this analogy and extend it into research on intuitive-comprehensive cognition in animals and young children.
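To make the dual-process framing concrete, here is a minimal, illustrative Python sketch of confidence-gated routing between a fast ‘System 1’ pass and a slower ‘System 2’ procedure, in the spirit of the metacognitive monitoring the abstract cites (Meyniel et al.; Fleming and Daw). It is not the paper's model: the functions system1, system2, and confidence, the top-two-probability confidence measure, and the THRESHOLD value are all assumptions introduced here for illustration.

# Illustrative sketch (not from the paper): a confidence-gated dual-process
# decision rule. A fast 'System 1' network answers by default; a metacognitive
# confidence read-out decides whether to escalate to a slower 'System 2' pass.
# All names and values below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # toy 'System 1' weights: one linear layer

def system1(x):
    """Fast, feed-forward 'intuitive' pass: one matrix product plus softmax."""
    logits = x @ W1
    p = np.exp(logits - logits.max())
    return p / p.sum()

def confidence(p):
    """Metacognitive read-out: here, the gap between the top two probabilities."""
    top2 = np.sort(p)[-2:]
    return top2[1] - top2[0]

def system2(x):
    """Slower 'deliberative' pass: a placeholder for an iterative procedure,
    here stood in for by averaging several noisy System-1 evaluations."""
    return np.mean([system1(x + rng.normal(scale=0.01, size=x.shape))
                    for _ in range(32)], axis=0)

THRESHOLD = 0.2  # hypothetical escalation threshold

def decide(x):
    p = system1(x)
    if confidence(p) >= THRESHOLD:  # confident intuition: answer directly
        return int(np.argmax(p)), "system1"
    return int(np.argmax(system2(x))), "system2"  # low confidence: deliberate

print(decide(rng.normal(size=4)))

The design point the sketch isolates is that the escalation rule, not either pass on its own, does the balancing the abstract assigns to NAS/PNAS-derived complexity; on that reading, the threshold (or the gating structure itself) would be a searched-over architectural parameter rather than a hand-set constant.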
@unpublished{noauthor_intuitions_2025,
	address = {University of Bucharest, Romania},
	title = {Intuitions versus understanding: the natural and the artificial stories},
	url = {https://philevents.org/event/show/131990},
	abstract = {Can machines have human-like features: understanding, intuition, empathy, Theory-of-Mind, and human-level general intelligence and, ultimately, consciousness? Do they ‘learn’/‘think’/‘feel’ like humans do? Do they have understanding or intuitions? These questions have been asked by scientists, psychologists, and the public at large. This paper offers an answer to the possibility of “AI intuitions” based on an analogy between the ‘natural’ and the ‘artificial.’ We note that almost the same questions have been incessantly asked about intuitions in animals, young children, or people affected by various conditions.
The answer offered here is ‘constitutive’: to explain some human-like features (including intuition and understanding, except probably consciousness) means to quantify over the features of the ‘architecture’ or the ‘mechanism’ underlying these systems. The common mechanism in this case is a ‘neural network’ that processes information. AI intuitions are better explained on the grounds of this analogy between artificial and natural neural networks. Having an intuition is determined by the presence or absence of certain structures or mechanisms; ditto for the faculty of understanding. The claim is that intuition and understanding play complementary roles in the functioning of a neural network, and this complementarity can be extended by analogy from natural to artificial networks.
We take ‘intuition’ as a type of “competence without comprehension” (Dennett, Krakauer, Gopnik). This implies that the results of an intuitive process do not follow from a chain of thought, so they are not the results of entailment or reasoning. Hagendorff, Fabi, and Kosinski associate “AI intuition” with Kahneman’s ‘System 1’; a reasoning process is then associated with ‘System 2’.
We follow a similar line of thought here based on the presence of metacognitive faculties. We assume that overwhelmingly intuitive systems do not possess a robust understanding, and vice versa. Intuitive competence is not monitored by any metacognitive mechanism (Meyniel et al.; Fleming and Daw). The presence of a metacognitive system could make the faculty of ‘understanding’ possible. In this sense, this paper takes intuition and understanding as complementary processes determined by the existence of specific metacognitive structures in the neural network. Finally, in a more formal part, the paper shows that one indicator of metacognitive skills is a certain degree of complexity, based on neural architecture search (NAS) and progressive neural architecture search (PNAS), which can play the role of balancing the intuitive-comprehensive decision processes in artificial networks. A future project is to strengthen this analogy and extend it into research on intuitive-comprehensive cognition in animals and young children.},
	note = {2. Philosophy of computation},
	month = mar,
	year = {2025},
}
