Semi-supervised active learning for sound classification in hybrid learning environments. Han, W., Coutinho, E., Ruan, H., Li, H., Schuller, B., Yu, X., & Zhu, X. PLoS ONE, 11(9):e0162075, September 2016.
Coping with the scarcity of labeled data is a common problem in sound classification tasks. Approaches to classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we efficiently combine confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation when training sound classification models. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores: candidates with low scores are delivered to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that, compared to Passive Learning, Active Learning and Self-Training, our approach requires significantly fewer labeled instances to reach the same performance in both scenarios. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task comprising 16,930 sound instances.
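The confidence-based split described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): instances whose highest class posterior falls below a threshold are routed to a human annotator (Active Learning), while the remainder receive the machine's own label (Self-Training). The logistic-regression classifier, the 0.85 threshold, and the toy feature vectors below are illustrative assumptions only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for acoustic feature vectors of sound instances.
X_labeled = rng.normal(size=(40, 20))
y_labeled = rng.integers(0, 2, size=40)      # two sound classes, for illustration
X_pool = rng.normal(size=(200, 20))          # unlabeled pool awaiting labels

CONFIDENCE_THRESHOLD = 0.85                  # assumed value; tuned in practice

# Train an initial classifier on the small labeled seed set.
model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# Confidence = highest class posterior the classifier assigns to an instance.
probs = model.predict_proba(X_pool)
confidence = probs.max(axis=1)
machine_labels = probs.argmax(axis=1)

low_conf = confidence < CONFIDENCE_THRESHOLD  # query a human annotator (Active Learning)
high_conf = ~low_conf                         # accept the machine label (Self-Training)

print(f"{low_conf.sum()} instances routed to human annotation, "
      f"{high_conf.sum()} auto-labeled by the machine")

In the stream-based scenario the same confidence test would be applied to each incoming instance individually rather than to a batch drawn from a pool.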
@article{han2016semisupervisedenvironments,
 title = {Semi-supervised active learning for sound classification in hybrid learning environments},
 type = {article},
 year = {2016},
 keywords = {article,journal},
 pages = {e0162075},
 volume = {11},
 websites = {http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000383680600017&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=f3ec48df247ee1138ccd8d3ba59bacc2 http://dx.plos.org/10.1371/journal.pone.0162075},
 month = {9},
 day = {14},
 id = {2fb19fb0-efc7-3ae7-bd2c-40d1948d933b},
 created = {2020-05-29T11:51:39.251Z},
 file_attached = {true},
 profile_id = {ffa9027c-806a-3827-93a1-02c42eb146a1},
 last_modified = {2023-05-15T08:14:21.423Z},
 read = {false},
 starred = {false},
 authored = {true},
 confirmed = {true},
 hidden = {false},
 citation_key = {han2016semisupervisedenvironments},
 source_type = {article},
 folder_uuids = {99880aa7-55df-4b45-bfce-0ffc00b23ced},
 private_publication = {false},
 abstract = {Coping with the scarcity of labeled data is a common problem in sound classification tasks. Approaches to classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we efficiently combine confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation when training sound classification models. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores: candidates with low scores are delivered to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that, compared to Passive Learning, Active Learning and Self-Training, our approach requires significantly fewer labeled instances to reach the same performance in both scenarios. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task comprising 16,930 sound instances.},
 bibtype = {article},
 author = {Han, Wenjing and Coutinho, Eduardo and Ruan, Huabin and Li, Haifeng and Schuller, Björn and Yu, Xiaojie and Zhu, Xuan},
 editor = {Schwenker, Friedhelm},
 doi = {10.1371/journal.pone.0162075},
 journal = {PLoS ONE},
 number = {9}
}
