Active Inference, Curiosity and Insight. Friston, K. J., Lin, M., Frith, C. D., Pezzulo, G., Hobson, J. A., & Ondobaka, S. Neural Computation, 29(10):2633–2683, October 2017.
@article{friston_active_2017,
	title = {Active {Inference}, {Curiosity} and {Insight}},
	volume = {29},
	issn = {0899-7667},
	url = {https://doi.org/10.1162/neco_a_00999},
	doi = {10.1162/neco_a_00999},
	abstract = {This article offers a formal account of curiosity and insight in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how people attain insight and understanding using just a handful of observations, which are solicited through curious behavior. We use simulations of abstract rule learning and approximate Bayesian inference to show that minimizing (expected) variational free energy leads to active sampling of novel contingencies. This epistemic behavior closes explanatory gaps in generative models of the world, thereby reducing uncertainty and satisfying curiosity. We then move from epistemic learning to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries (i.e., invariances or rules) in their generative models. The ensuing Bayesian model reduction evinces mechanisms associated with sleep and has all the hallmarks of “aha” moments. This formulation moves toward a computational account of consciousness in the pre-Cartesian sense of sharable knowledge (i.e., con: “together”; scire: “to know”).},
	number = {10},
	urldate = {2021-06-05},
	journal = {Neural Computation},
	author = {Friston, Karl J. and Lin, Marco and Frith, Christopher D. and Pezzulo, Giovanni and Hobson, J. Allan and Ondobaka, Sasha},
	month = oct,
	year = {2017},
	note = {ZSCC: 0000226},
	pages = {2633--2683},
}
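The abstract's central claim — that minimizing expected variational free energy yields curious, epistemic sampling — can be illustrated with a small sketch. The function below is not from the paper; it is an illustrative NumPy implementation of the standard discrete-state decomposition of expected free energy into a pragmatic (preference-seeking) term and an epistemic (information-gain) term, with all names and array shapes assumed for the example.

```python
import numpy as np

def expected_free_energy(qs, A, log_C):
    """Illustrative expected free energy for one policy step.

    qs    : predicted hidden-state distribution under the policy, shape (S,)
    A     : likelihood matrix P(o|s), shape (O, S)
    log_C : log prior preferences over outcomes, shape (O,)
    """
    qo = A @ qs                                  # predicted outcome distribution
    # Pragmatic value: expected log preference over predicted outcomes
    pragmatic = qo @ log_C
    # Epistemic value: expected information gain about hidden states,
    # i.e. mutual information I(o; s) = H(o) - E_s[H(o|s)]
    eps = 1e-16                                  # guard against log(0)
    H_qo = -(qo * np.log(qo + eps)).sum()        # entropy of predicted outcomes
    H_A = -(A * np.log(A + eps)).sum(axis=0)     # outcome ambiguity per state
    epistemic = H_qo - H_A @ qs
    # Lower expected free energy = better policy
    return -(pragmatic + epistemic)
```

With flat preferences (`log_C = 0`), a policy whose likelihood mapping is informative (low ambiguity) has lower expected free energy than one whose outcomes are uninformative — which is the sense in which free-energy minimization "solicits" novel, uncertainty-reducing observations.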