Enhancing Few-Shot Image Classification With Unlabelled Examples. Bateni, P., Barber, J., van de Meent, J., & Wood, F. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2796-2805, January, 2022.
We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance. Our approach combines a regularized Mahalanobis-distance-based soft k-means clustering procedure with a modified state of the art neural adaptive feature extractor to achieve improved test-time classification accuracy using unlabelled data. We evaluate our method on transductive few-shot learning tasks, in which the goal is to jointly predict labels for query (test) examples given a set of support (training) examples. We achieve state of the art performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks.
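The core idea can be illustrated with a minimal sketch of Mahalanobis-distance-based soft k-means over query features. This is not the paper's implementation (which uses a neural adaptive feature extractor and its own regularization scheme); the function name, the shared regularized covariance, and the simple responsibility-weighted mean updates below are illustrative assumptions.

```python
import numpy as np

def soft_kmeans_mahalanobis(support, support_labels, query, n_iters=5, reg=0.1):
    """Illustrative sketch: refine class means with unlabelled query features
    using soft assignments under a regularized Mahalanobis metric.
    Simplified relative to the paper's actual procedure."""
    classes = np.unique(support_labels)
    d = support.shape[1]

    # Initialize class means from the labelled support examples.
    means = np.stack([support[support_labels == c].mean(axis=0) for c in classes])

    # Shared covariance estimate, regularized toward the identity (assumption).
    cov = np.cov(support, rowvar=False) + reg * np.eye(d)
    prec = np.linalg.inv(cov)

    for _ in range(n_iters):
        # Squared Mahalanobis distance of each query point to each class mean.
        diff = query[:, None, :] - means[None, :, :]            # (Q, C, d)
        dist2 = np.einsum('qcd,de,qce->qc', diff, prec, diff)   # (Q, C)

        # Soft assignments (responsibilities) via softmax over negative distances.
        logits = -0.5 * dist2
        logits -= logits.max(axis=1, keepdims=True)
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)

        # Update each mean from labelled support points plus soft-weighted query points.
        for k, c in enumerate(classes):
            sup_k = support[support_labels == c]
            w = resp[:, k]
            means[k] = (sup_k.sum(axis=0) + (w[:, None] * query).sum(axis=0)) \
                       / (len(sup_k) + w.sum())

    return resp.argmax(axis=1), means

# Example usage on random features (stand-ins for extracted embeddings):
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 16))
support_labels = np.repeat(np.arange(5), 2)
query = rng.normal(size=(20, 16))
pred, _ = soft_kmeans_mahalanobis(support, support_labels, query)
```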
@InProceedings{Bateni_2022_WACV,
    author    = {Bateni, Peyman and Barber, Jarred and van de Meent, Jan-Willem and Wood, Frank},
    title     = {Enhancing Few-Shot Image Classification With Unlabelled Examples},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {2796--2805},
    url_ArXiv = {https://arxiv.org/abs/2006.12245},
    url_Paper = {https://ieeexplore.ieee.org/document/9706775},
    support = {D3M,LwLL},
    abstract={We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance. Our approach combines a regularized Mahalanobis-distance-based soft k-means clustering procedure with a modified state of the art neural adaptive feature extractor to achieve improved test-time classification accuracy using unlabelled data. We evaluate our method on transductive few-shot learning tasks, in which the goal is to jointly predict labels for query (test) examples given a set of support (training) examples. We achieve state of the art performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks.}
}
