Generalizing from a Few Examples: A Survey on Few-shot Learning. Wang, Y., Yao, Q., Kwok, J. T., & Ni, L. M. ACM Computing Surveys, 53(3):63:1–63:34, June 2020.
Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-shot Learning (FSL) is proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this article, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimizer is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications, and theories, are also proposed to provide insights for future research.
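For context on the abstract's core claim that the empirical risk minimizer is unreliable with few samples, the following is a minimal sketch of the standard risk-decomposition argument the survey builds on; the notation (expected risk R, empirical risk R_I over I training samples, hypothesis space H) is generic and not taken from this listing.

\[ R(h) = \mathbb{E}_{(x,y)\sim p}\,[\ell(h(x), y)], \qquad R_I(h) = \tfrac{1}{I}\sum_{i=1}^{I} \ell(h(x_i), y_i) \]
\[ h^{*} = \arg\min_{h} R(h), \qquad h_{\mathcal{H}} = \arg\min_{h \in \mathcal{H}} R(h), \qquad h_I = \arg\min_{h \in \mathcal{H}} R_I(h) \]
\[ \mathbb{E}\big[R(h_I) - R(h^{*})\big] = \underbrace{\mathbb{E}\big[R(h_{\mathcal{H}}) - R(h^{*})\big]}_{\text{approximation error}} + \underbrace{\mathbb{E}\big[R(h_I) - R(h_{\mathcal{H}})\big]}_{\text{estimation error}} \]

With only a few supervised samples (small I), the estimation error can be large, so h_I may generalize poorly. The three categories named in the abstract map onto this decomposition: prior knowledge is used to augment the data (effectively increasing I), to shrink the hypothesis space H (model), or to guide the search for h_I (algorithm).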
@article{wang_generalizing_2020,
	title = {Generalizing from a {Few} {Examples}: {A} {Survey} on {Few}-shot {Learning}},
	volume = {53},
	issn = {0360-0300},
	shorttitle = {Generalizing from a {Few} {Examples}},
	url = {https://doi.org/10.1145/3386252},
	doi = {10.1145/3386252},
	abstract = {Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-shot Learning (FSL) is proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this article, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimizer is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications, and theories, are also proposed to provide insights for future research.},
	number = {3},
	urldate = {2022-04-25},
	journal = {ACM Computing Surveys},
	author = {Wang, Yaqing and Yao, Quanming and Kwok, James T. and Ni, Lionel M.},
	month = jun,
	year = {2020},
	keywords = {Few-shot learning, low-shot learning, meta-learning, one-shot learning, prior knowledge, small sample learning},
	pages = {63:1--63:34},
}
