{"_id":"vMxbQkkjucydntjBt","bibbaseid":"anonymous-baggingwithasymmetriccostsformisclassifiedandcorrectlyclassifiedexamples-2007","downloads":0,"creationDate":"2017-03-31T20:15:32.690Z","title":"Bagging with asymmetric costs for misclassified and correctly classified examples","author_short":null,"year":2007,"bibtype":"inproceedings","biburl":"https://1fichier.com/?j9cpurkmnv","bibdata":{"bibtype":"inproceedings","type":"inproceedings","abstract":"Diversity is a key characteristic to obtain advantages of combining predictors. In this paper, we propose a modification of bagging to explicitly trade off diversity and individual accuracy. The procedure consists in dividing the bootstrap replicates obtained at each iteration of the algorithm into two subsets: one consisting of the examples misclassified by the ensemble obtained at the previous iteration, and the other consisting of the examples correctly recognized. A high individual accuracy of a new classifier on the first subset increases diversity, measured as the value of the Q statistic between the new classifier and the existing classifier ensemble. A high accuracy on the second subset, on the other hand, decreases diversity. We trade off between both components of the individual accuracy using a parameter λ ∈ [0, 1] that changes the cost of a misclassification on the second subset. Experiments are provided using well-known classification problems obtained from UCI. Results are also compared with boosting and bagging. © Springer-Verlag Berlin Heidelberg 2007.","year":"2007","title":"Bagging with asymmetric costs for misclassified and correctly classified examples","volume":"4756 LNCS","pages":"694-703","booktitle":"Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)","bibtex":"@inproceedings{38449085234,\n abstract = \"Diversity is a key characteristic to obtain advantages of combining predictors. In this paper, we propose a modification of bagging to explicitly trade off diversity and individual accuracy. The procedure consists in dividing the bootstrap replicates obtained at each iteration of the algorithm into two subsets: one consisting of the examples misclassified by the ensemble obtained at the previous iteration, and the other consisting of the examples correctly recognized. A high individual accuracy of a new classifier on the first subset increases diversity, measured as the value of the Q statistic between the new classifier and the existing classifier ensemble. A high accuracy on the second subset, on the other hand, decreases diversity. We trade off between both components of the individual accuracy using a parameter λ ∈ [0, 1] that changes the cost of a misclassification on the second subset. Experiments are provided using well-known classification problems obtained from UCI. Results are also compared with boosting and bagging. © Springer-Verlag Berlin Heidelberg 2007.\",\n year = \"2007\",\n title = \"Bagging with asymmetric costs for misclassified and correctly classified examples\",\n volume = \"4756 LNCS\",\n pages = \"694-703\",\n booktitle = \"Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)\"\n}\n\n","key":"38449085234","id":"38449085234","bibbaseid":"anonymous-baggingwithasymmetriccostsformisclassifiedandcorrectlyclassifiedexamples-2007","urls":{},"downloads":0,"html":""},"search_terms":["bagging","asymmetric","costs","misclassified","correctly","classified","examples"],"keywords":[],"authorIDs":[],"dataSources":["gKiCRHjjC2iGthGEx"]}