Probabilistic aggregation of classifiers for incremental learning. In Lecture Notes in Computer Science, volume 4507, pages 135–143, 2007.
We work with a recently proposed algorithm in which an ensemble of base classifiers, combined using weighted majority voting, is used for incremental classification of data. To successfully accommodate novel information without compromising previously acquired knowledge, this algorithm requires an adequate strategy for determining the voting weights. Given an instance to classify, we propose to define each voting weight as the posterior probability of the corresponding hypothesis given the instance. By operating with priors and likelihood models, the obtained weights can take into account not only the location of the instance in the different class-specific feature spaces but also the coverage of each class given the classifier and the quality of the learned hypothesis. This approach can provide important improvements in the generalization performance of the resulting classifier and in its ability to control the stability/plasticity tradeoff. Experiments are carried out on three real classification problems previously introduced to test incremental algorithms. © Springer-Verlag Berlin Heidelberg 2007.
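Since the abstract describes the weighting rule only in words, the following is a minimal sketch of how such posterior voting weights could be computed. The Gaussian likelihood models, the accuracy-based priors, and the sklearn-style `predict` interface are illustrative assumptions, not details taken from the paper.

```python
# Sketch of posterior-weighted majority voting: each base hypothesis h_t
# votes with weight P(h_t | x), proportional to P(x | h_t) * P(h_t).
import numpy as np
from scipy.stats import multivariate_normal

def posterior_weights(x, likelihood_models, priors):
    """Return normalized weights P(h_t | x) for one instance x.

    likelihood_models: one density per hypothesis (here assumed to be a
    frozen scipy multivariate_normal fitted to that hypothesis's data).
    priors: P(h_t), e.g. proportional to validation accuracy (assumption).
    """
    likelihoods = np.array([m.pdf(x) for m in likelihood_models])
    posterior = likelihoods * np.asarray(priors)
    return posterior / posterior.sum()  # normalize so weights sum to 1

def weighted_majority_vote(x, classifiers, likelihood_models, priors, n_classes):
    """Combine base classifiers using the posterior voting weights.

    Assumes integer class labels 0..n_classes-1 and classifiers exposing
    a sklearn-style predict() method (both are assumptions).
    """
    weights = posterior_weights(x, likelihood_models, priors)
    votes = np.zeros(n_classes)
    for w, clf in zip(weights, classifiers):
        votes[clf.predict(x.reshape(1, -1))[0]] += w
    return int(np.argmax(votes))
```

Because the weights depend on the instance x through the likelihood models, a hypothesis trained on a region of feature space far from x contributes little to the vote, which is one way to read the abstract's claim about controlling the stability/plasticity tradeoff.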
@inproceedings{38049115663,
    abstract = "We work with a recently proposed algorithm in which an ensemble of base classifiers, combined using weighted majority voting, is used for incremental classification of data. To successfully accommodate novel information without compromising previously acquired knowledge, this algorithm requires an adequate strategy for determining the voting weights. Given an instance to classify, we propose to define each voting weight as the posterior probability of the corresponding hypothesis given the instance. By operating with priors and likelihood models, the obtained weights can take into account not only the location of the instance in the different class-specific feature spaces but also the coverage of each class given the classifier and the quality of the learned hypothesis. This approach can provide important improvements in the generalization performance of the resulting classifier and in its ability to control the stability/plasticity tradeoff. Experiments are carried out on three real classification problems previously introduced to test incremental algorithms. © Springer-Verlag Berlin Heidelberg 2007.",
    year = "2007",
    title = "Probabilistic aggregation of classifiers for incremental learning",
    volume = "4507",
    pages = "135--143",
    booktitle = "Lecture Notes in Computer Science"
}
