One-shot generalization in humans revealed through a drawing task. Tiedemann, H., Morgenstern, Y., Schmidt, F., & Fleming, R. W. eLife, 11:e75485, May 2022. Publisher: eLife Sciences Publications, Ltd.
@article{tiedemann_one-shot_2022,
	title = {One-shot generalization in humans revealed through a drawing task},
	volume = {11},
	issn = {2050-084X},
	url = {https://doi.org/10.7554/eLife.75485},
	doi = {10.7554/eLife.75485},
	abstract = {Humans have the amazing ability to learn new visual concepts from just a single exemplar. How we achieve this remains mysterious. State-of-the-art theories suggest observers rely on internal ‘generative models’, which not only describe observed objects, but can also synthesize novel variations. However, compelling evidence for generative models in human one-shot learning remains sparse. In most studies, participants merely compare candidate objects created by the experimenters, rather than generating their own ideas. Here, we overcame this key limitation by presenting participants with 2D ‘Exemplar’ shapes and asking them to draw their own ‘Variations’ belonging to the same class. The drawings reveal that participants inferred—and synthesized—genuine novel categories that were far more varied than mere copies. Yet, there was striking agreement between participants about which shape features were most distinctive, and these tended to be preserved in the drawn Variations. Indeed, swapping distinctive parts caused objects to swap apparent category. Our findings suggest that internal generative models are key to how humans generalize from single exemplars. When observers see a novel object for the first time, they identify its most distinctive features and infer a generative model of its shape, allowing them to mentally synthesize plausible variants.},
	urldate = {2022-05-12},
	journal = {eLife},
	author = {Tiedemann, Henning and Morgenstern, Yaniv and Schmidt, Filipp and Fleming, Roland W},
	editor = {Barense, Morgan and Baker, Chris I and Bainbridge, Wilma},
	month = may,
	year = {2022},
	note = {Publisher: eLife Sciences Publications, Ltd},
	keywords = {categorization, shape perception, visual perception},
	pages = {e75485},
}