The discovery of structural form. Kemp, C. & Tenenbaum, J. B. Proceedings of the National Academy of Sciences of the United States of America, 105(31):10687–10692, August 2008.
@article{Kemp2008,
	title = {The discovery of structural form},
	volume = {105},
	issn = {1091-6490},
	url = {http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2492756&tool=pmcentrez&rendertype=abstract},
	doi = {10.1073/pnas.0802631105},
	abstract = {Algorithms for finding structure in data have become increasingly important both as tools for scientific data analysis and as models of human learning, yet they suffer from a critical limitation. Scientists discover qualitatively new forms of structure in observed data: For instance, Linnaeus recognized the hierarchical organization of biological species, and Mendeleev recognized the periodic structure of the chemical elements. Analogous insights play a pivotal role in cognitive development: Children discover that object category labels can be organized into hierarchies, friendship networks are organized into cliques, and comparative relations (e.g., "bigger than" or "better than") respect a transitive order. Standard algorithms, however, can only learn structures of a single form that must be specified in advance: For instance, algorithms for hierarchical clustering create tree structures, whereas algorithms for dimensionality-reduction create low-dimensional spaces. Here, we present a computational model that learns structures of many different forms and that discovers which form is best for a given dataset. The model makes probabilistic inferences over a space of graph grammars representing trees, linear orders, multidimensional spaces, rings, dominance hierarchies, cliques, and other forms and successfully discovers the underlying structure of a variety of physical, biological, and social domains. Our approach brings structure learning methods closer to human abilities and may lead to a deeper computational understanding of cognitive development.},
	number = {31},
	journal = {Proceedings of the National Academy of Sciences of the United States of America},
	author = {Kemp, Charles and Tenenbaum, Joshua B.},
	month = aug,
	year = {2008},
	pmid = {18669663},
	keywords = {Algorithms; Data Interpretation, Statistical; Humans; Learning; Learning: physiology; Models, Theoretical; Research Design},
	pages = {10687--10692},
}