@techreport{johnston_abstract_2022,
	title = {Abstract representations emerge naturally in neural networks trained to perform multiple tasks},
	copyright = {© 2022, Posted by Cold Spring Harbor Laboratory. This pre-print is available under a Creative Commons License (Attribution-NonCommercial 4.0 International), CC BY-NC 4.0, as described at http://creativecommons.org/licenses/by-nc/4.0/},
	url = {https://www.biorxiv.org/content/10.1101/2021.10.20.465187v3},
	abstract = {Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability to generalize arises from a specific representational geometry, that we call abstract and that is referred to as disentangled in machine learning. These abstract representations have been observed in recent neurophysiological studies. However, it is unknown how they emerge. Here, using feedforward neural networks, we demonstrate that the learning of multiple tasks causes abstract representations to emerge, using both supervised and reinforcement learning. We show that these abstract representations enable few-sample learning and reliable generalization on novel tasks. We conclude that abstract representations of sensory and cognitive variables may emerge from the multiple behaviors that animals exhibit in the natural world, and, as a consequence, could be pervasive in high-level brain regions. We also make several specific predictions about which variables will be represented abstractly.},
	language = {en},
	urldate = {2022-05-18},
	institution = {bioRxiv},
	author = {Johnston, W. Jeffrey and Fusi, Stefano},
	month = may,
	year = {2022},
	doi = {10.1101/2021.10.20.465187},
}
