Wrappers for feature subset selection. Kohavi, R. & John, G. H. Artificial Intelligence, 97(1-2):273-324, Elsevier Science Publishers Ltd., December 1997.
In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes.
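The abstract describes the wrapper approach: search the space of feature subsets, scoring each candidate by the estimated accuracy of the target induction algorithm itself rather than by a learner-independent filter criterion. A minimal sketch of that idea, using greedy forward search with a toy 1-nearest-neighbour learner under leave-one-out cross-validation (the learner, data, and function names here are illustrative assumptions, not taken from the paper):

```python
def loo_accuracy_1nn(X, y, features):
    """Leave-one-out accuracy of a 1-NN classifier restricted to `features`."""
    if not features:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best, pred = None, None
        for j in range(len(X)):
            if i == j:
                continue
            # Squared Euclidean distance over the selected features only.
            d = sum((X[i][f] - X[j][f]) ** 2 for f in features)
            if best is None or d < best:
                best, pred = d, y[j]
        correct += (pred == y[i])
    return correct / len(X)

def wrapper_forward_select(X, y, n_features):
    """Greedy forward search driven by the learner's own estimated accuracy."""
    selected, remaining = [], list(range(n_features))
    best_score = loo_accuracy_1nn(X, y, selected)
    while remaining:
        # Score each candidate subset by re-running the induction algorithm.
        score, f = max((loo_accuracy_1nn(X, y, selected + [f]), f)
                       for f in remaining)
        if score <= best_score:  # stop when no feature improves accuracy
            break
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score

# Toy data: feature 0 determines the class, feature 1 is noise.
X = [[0.0, 5.1], [0.1, 1.2], [0.9, 4.8], [1.0, 0.3],
     [0.05, 2.9], [0.95, 3.7]]
y = [0, 0, 1, 1, 0, 1]
subset, acc = wrapper_forward_select(X, y, n_features=2)
```

On this data the search selects only the informative feature and discards the noise feature, illustrating how the wrapper tailors the subset to the particular learner and dataset; the paper's filter baseline (Relief) would instead score features without consulting the learner.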
@article{kohavi:wrappers1997,
 title = {Wrappers for feature subset selection},
 author = {Kohavi, Ron and John, George H.},
 journal = {Artificial Intelligence},
 year = {1997},
 month = {12},
 volume = {97},
 number = {1-2},
 pages = {273-324},
 publisher = {Elsevier Science Publishers Ltd.},
 doi = {10.1016/s0004-3702(97)00043-x},
 url = {http://dx.doi.org/10.1016/s0004-3702(97)00043-x},
 keywords = {feature-selection,wrapper-method},
 abstract = {In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes.}
}
