Dynamic selection of the best base classifier in One versus One. Mendialdua, I., Martínez-Otzeta, J. M., Rodriguez-Rodriguez, I., Ruiz-Vazquez, T., & Sierra, B. Knowledge-Based Systems, 85:298-306, Elsevier, September 2015.
Class binarization strategies decompose the original multi-class problem into several binary sub-problems. One versus One (OVO) is one of the most popular class binarization techniques; it treats every pair of classes as a different sub-problem. Usually, the same classifier is applied to every sub-problem and all the outputs are then combined by some voting scheme. In this paper we present a novel idea in which, for each test instance, we try to assign the best classifier to each sub-problem of OVO. To do so, we use two simple Dynamic Classifier Selection (DCS) strategies that have not yet been used in this context. The two DCS strategies use K-NN to obtain the local region of the test instance, and the classifier that performs best on the instances in that local region is selected to classify the new test instance. The difference between the two DCS strategies lies in the weight given to each instance. In this paper we also propose a novel approach within those DCS strategies: using the K-Nearest Neighbor Equality (K-NNE) method to obtain the local accuracy. K-NNE is an extension of K-NN in which all the classes are treated independently: the K nearest neighbors belonging to each class are selected, so every class takes part in the final decision. We have carried out an empirical study over several UCI databases, which shows the robustness of our proposal.
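
The following Python sketch illustrates the general idea described in the abstract: within each OVO sub-problem, the base classifier with the highest local accuracy over the K-NN neighbourhood of the test instance casts that pair's vote, and the pairwise votes are aggregated by majority. It is an illustrative reading, not the authors' implementation: the pool of base learners, k=5, the resubstitution-based local-accuracy estimate, and the absence of instance weighting are all assumptions, and the K-NNE variant (selecting the K nearest neighbours of each class separately) is omitted.

from itertools import combinations

import numpy as np
from sklearn.base import clone
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier


def ovo_dcs_predict(X_train, y_train, X_test, base_learners, k=5):
    """OVO with per-instance dynamic selection of the base classifier (sketch)."""
    classes = np.unique(y_train)
    votes = np.zeros((len(X_test), len(classes)), dtype=int)

    for i, j in combinations(range(len(classes)), 2):
        # One OVO sub-problem: keep only the two classes of this pair.
        mask = np.isin(y_train, [classes[i], classes[j]])
        X_sub, y_sub = X_train[mask], y_train[mask]

        # Train every candidate base classifier on the sub-problem.
        fitted = [clone(est).fit(X_sub, y_sub) for est in base_learners]

        # Resubstitution predictions on the sub-problem's training points;
        # local accuracy is estimated from these (a simplifying assumption).
        train_preds = [est.predict(X_sub) for est in fitted]

        # Plain K-NN defines the local region of each test instance.
        nn = NearestNeighbors(n_neighbors=min(k, len(X_sub))).fit(X_sub)
        neigh_idx = nn.kneighbors(X_test, return_distance=False)

        for t, idx in enumerate(neigh_idx):
            # Accuracy of each classifier over the local neighbourhood.
            local_acc = [np.mean(p[idx] == y_sub[idx]) for p in train_preds]
            best = fitted[int(np.argmax(local_acc))]

            # The locally best classifier casts this pair's vote.
            winner = best.predict(X_test[t:t + 1])[0]
            votes[t, i if winner == classes[i] else j] += 1

    # Standard OVO aggregation: the class with the most votes wins.
    return classes[np.argmax(votes, axis=1)]


if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pool = [DecisionTreeClassifier(random_state=0),
            GaussianNB(),
            LogisticRegression(max_iter=1000)]
    preds = ovo_dcs_predict(X_tr, y_tr, X_te, pool, k=5)
    print("accuracy:", np.mean(preds == y_te))

The pool of three heterogeneous learners and the Iris data set are chosen only to make the example self-contained; the paper's empirical study uses several UCI databases.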
@article{mendialdua2015dynamic,
 title = {Dynamic selection of the best base classifier in One versus One},
 type = {article},
 year = {2015},
 keywords = {Classifier combination,Decomposition strategies,Dynamic classifier selection,Machine learning,One against One,Supervised classification},
 pages = {298-306},
 volume = {85},
 month = {9},
 publisher = {Elsevier},
 day = {1},
 abstract = {Class binarization strategies decompose the original multi-class problem into several binary sub-problems. One versus One (OVO) is one of the most popular class binarization techniques; it treats every pair of classes as a different sub-problem. Usually, the same classifier is applied to every sub-problem and all the outputs are then combined by some voting scheme. In this paper we present a novel idea in which, for each test instance, we try to assign the best classifier to each sub-problem of OVO. To do so, we use two simple Dynamic Classifier Selection (DCS) strategies that have not yet been used in this context. The two DCS strategies use K-NN to obtain the local region of the test instance, and the classifier that performs best on the instances in that local region is selected to classify the new test instance. The difference between the two DCS strategies lies in the weight given to each instance. In this paper we also propose a novel approach within those DCS strategies: using the K-Nearest Neighbor Equality (K-NNE) method to obtain the local accuracy. K-NNE is an extension of K-NN in which all the classes are treated independently: the K nearest neighbors belonging to each class are selected, so every class takes part in the final decision. We have carried out an empirical study over several UCI databases, which shows the robustness of our proposal.},
 bibtype = {article},
 author = {Mendialdua, I. and Martínez-Otzeta, J. M. and Rodriguez-Rodriguez, I. and Ruiz-Vazquez, T. and Sierra, B.},
 doi = {10.1016/j.knosys.2015.05.015},
 journal = {Knowledge-Based Systems}
}
