Dynamic selection of the best base classifier in One versus One. Mendialdua, I.; Martínez-Otzeta, J. M.; Rodriguez-Rodriguez, I.; Ruiz-Vazquez, T.; and Sierra, B. Knowledge-Based Systems, 85:298–306, Elsevier, September 2015.
@article{mendialdua2015dynamic,
 title = {Dynamic selection of the best base classifier in One versus One},
 year = {2015},
 keywords = {Classifier combination,Decomposition strategies,Dynamic classifier selection,Machine learning,One against One,Supervised classification},
 pages = {298--306},
 volume = {85},
 month = sep,
 publisher = {Elsevier},
 day = {1},
 abstract = {Class binarization strategies decompose the original multi-class problem into several binary sub-problems. One versus One (OVO) is one of the most popular class binarization techniques, which considers every pair of classes as a different sub-problem. Usually, the same classifier is applied to every sub-problem and then all the outputs are combined by some voting scheme. In this paper we present a novel idea where, for each test instance, we try to assign the best classifier to each sub-problem of OVO. To do so, we use two simple Dynamic Classifier Selection (DCS) strategies that have not yet been used in this context. The two DCS strategies use K-NN to obtain the local region of the test instance, and the classifier that performs best on the instances in that local region is selected to classify the new test instance. The difference between the two DCS strategies lies in how the instances are weighted. In this paper we also propose a novel approach within these DCS strategies: we propose to use the K-Nearest Neighbor Equality (K-NNE) method to obtain the local accuracy. K-NNE is an extension of K-NN in which all the classes are treated independently: the K nearest neighbors belonging to each class are selected. In this way all the classes take part in the final decision. We have carried out an empirical study over several UCI databases, which shows the robustness of our proposal.},
 author = {Mendialdua, I. and Martínez-Otzeta, J. M. and Rodriguez-Rodriguez, I. and Ruiz-Vazquez, T. and Sierra, B.},
 journal = {Knowledge-Based Systems}
}
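The K-NNE neighbor selection described in the abstract — taking the K nearest training neighbors from each class independently, so that every class contributes to the local region — can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the NumPy representation are assumptions.

```python
import numpy as np

def knne_neighbors(X_train, y_train, x, k):
    """K-Nearest Neighbor Equality: select the k nearest training
    neighbors of x from *each* class independently, so every class
    takes part in the local region used to estimate local accuracy."""
    dists = np.linalg.norm(X_train - x, axis=1)
    neighbors = []
    for c in np.unique(y_train):
        idx = np.where(y_train == c)[0]
        # k closest members of class c (or all of them, if fewer than k)
        nearest = idx[np.argsort(dists[idx])[:k]]
        neighbors.extend(nearest.tolist())
    return neighbors
```

In a DCS setting, the local accuracy of each candidate base classifier would then be estimated on the returned neighbor set, and the best-performing classifier chosen for that test instance.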