Embedded local feature selection within mixture of experts. Peralta, B. & Soto, A. Information Sciences, 269:176–187, 2014.

A useful strategy to deal with complex classification scenarios is the divide-and-conquer approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of the experts and the gate function, particularly in the case of high-dimensional data. In general, subsets of dimensions, or subspaces, are usually more appropriate for classifying instances located in different regions of the input space. Accordingly, this work contributes a regularized variant of MoE that incorporates an embedded process for local feature selection using L1-regularization. Experiments using artificial and real-world datasets provide evidence that the proposed method improves on the classical MoE technique in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data.
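The abstract describes the model family but not the implementation. The following is a minimal sketch of the general idea, not the authors' RMoE algorithm: a softmax gate over K linear experts, trained by plain (sub)gradient descent on the negative log-likelihood plus an L1 penalty on both the expert and gate weights. Everything here (the number of experts K, the penalty weight lam, the learning rate, the toy data) is an invented illustration; the paper's own model details and optimization procedure are not reproduced.

```python
import jax
import jax.numpy as jnp

K, D, C = 4, 20, 2  # number of experts, input dimensions, classes

def init_params(key):
    k1, k2 = jax.random.split(key)
    return {
        "gate": 0.01 * jax.random.normal(k1, (D, K)),        # gate weights
        "experts": 0.01 * jax.random.normal(k2, (K, D, C)),  # per-expert weights
    }

def predict(params, x):
    g = jax.nn.softmax(x @ params["gate"], axis=-1)           # (N, K) gate weights per instance
    logits = jnp.einsum("nd,kdc->nkc", x, params["experts"])  # per-expert class logits
    p = jax.nn.softmax(logits, axis=-1)                       # (N, K, C) expert posteriors
    return jnp.einsum("nk,nkc->nc", g, p)                     # mixture: sum_k g_k(x) p_k(y|x)

def loss(params, x, y, lam=1e-3):
    p = predict(params, x)
    nll = -jnp.mean(jnp.log(p[jnp.arange(y.shape[0]), y] + 1e-12))
    # The L1 term pushes individual weights toward zero, so each expert
    # and the gate effectively select their own feature subsets.
    l1 = jnp.abs(params["gate"]).sum() + jnp.abs(params["experts"]).sum()
    return nll + lam * l1

@jax.jit
def step(params, x, y, lr=0.1):
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (256, D))
y = (x[:, 0] + x[:, 1] > 0).astype(jnp.int32)  # toy labels depend on 2 of the 20 dims
params = init_params(key)
for _ in range(500):
    params = step(params, x, y)
print("fraction of near-zero expert weights:",
      float(jnp.mean(jnp.abs(params["experts"]) < 1e-3)))
```

Note that plain subgradient descent only drives weights near zero rather than exactly to zero; a proximal (soft-thresholding) update, or an EM-style procedure of the kind MoE papers typically use, would produce exact sparsity, which is why the check above uses a small threshold instead of testing for zeros.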
@Article{peralta:soto:2014,
  author   = {B. Peralta and A. Soto},
  title    = {Embedded local feature selection within mixture of experts},
  journal  = {Information Sciences},
  volume   = {269},
  pages    = {176--187},
  year     = {2014},
  abstract = {A useful strategy to deal with complex classification
              scenarios is the divide-and-conquer approach. The mixture
              of experts (MoE) technique makes use of this strategy by
              jointly training a set of classifiers, or experts, that are
              specialized in different regions of the input space. A
              global model, or gate function, complements the experts by
              learning a function that weighs their relevance in
              different parts of the input space. Local feature selection
              appears as an attractive alternative to improve the
              specialization of the experts and the gate function,
              particularly in the case of high-dimensional data. In
              general, subsets of dimensions, or subspaces, are usually
              more appropriate for classifying instances located in
              different regions of the input space. Accordingly, this
              work contributes a regularized variant of MoE that
              incorporates an embedded process for local feature
              selection using L1-regularization. Experiments using
              artificial and real-world datasets provide evidence that
              the proposed method improves on the classical MoE technique
              in terms of accuracy and sparseness of the solution.
              Furthermore, our results indicate that the advantages of
              the proposed technique increase with the dimensionality of
              the data.},
  url      = {http://saturno.ing.puc.cl/media/papers_alvaro/RMoE.pdf}
}
{"_id":{"_str":"53427a470e946d920a0018c0"},"__v":1,"authorIDs":["32ZR23o2BFySHbtQK","3ear6KFZSRqbj6YeT","4Pq6KLaQ8jKGXHZWH","54578d9a2abc8e9f370004f0","5e126ca5a4cabfdf01000053","5e158f76f1f31adf01000118","5e16174bf67f7dde010003ad","5e1f631ae8f5ddde010000eb","5e1f7182e8f5ddde010001ff","5e26da3642065ede01000066","5e3acefaf2a00cdf010001c8","5e62c3aecb259cde010000f9","5e65830c6e5f4cf3010000e7","5e666dfc46e828de010002c9","6cMBYieMJhf6Nd58M","6w6sGsxYSK2Quk6yZ","7xDcntrrtC62vkWM5","ARw5ReidxxZii9TTZ","BjzM7QpRCG7uCF7Zf","DQ4JRTTWkvKXtCNCp","GbYBJvxugXMriQwbi","HhRoRmBvwWfD4oLyK","JFk6x26H6LZMoht2n","JvArGGu5qM6EvSCvB","LpqQBhFH3PxepH9KY","MT4TkSGzAp69M3dGt","QFECgvB5v2i4j2Qzs","RKv56Kes3h6FwEa55","Rb9TkQ3KkhGAaNyXq","RdND8NxcJDsyZdkcK","SpKJ5YujbHKZnHc4v","TSRdcx4bbYKqcGbDg","W8ogS2GJa6sQKy26c","WTi3X2fT8dzBN5d8b","WfZbctNQYDBaiYW6n","XZny8xuqwfoxzhBCB","Xk2Q5qedS5MFHvjEW","bbARiTJLYS79ZMFbk","cBxsyeZ37EucQeBYK","cFyFQps7W3Sa2Wope","dGRBfr8zhMmbwK6eP","eRLgwkrEk7T7Lmzmf","fMYSCX8RMZap548vv","g6iKCQCFnJgKYYHaP","h2hTcQYuf2PB3oF8t","h83jBvZYJPJGutQrs","jAtuJBcGhng4Lq2Nd","pMoo2gotJcdDPwfrw","q5Zunk5Y2ruhw5vyq","rzNGhqxkbt2MvGY29","uC8ATA8AfngWpYLBq","uoJ7BKv28Q6TtPmPp","vMiJzqEKCsBxBEa3v","vQE6iTPpjxpuLip2Z","wQDRsDjhgpMJDGxWX","wbNg79jvDpzX9zHLK","wk86BgRiooBjy323E","zCbPxKnQGgDHiHMWn","zf9HENjsAzdWLMDAu"],"author_short":["Peralta, B.","Soto, A."],"bibbaseid":"peralta-soto-embeddedlocalfeatureselectionwithinmixtureofexperts-2014","bibdata":{"bibtype":"article","type":"article","author":[{"firstnames":["B."],"propositions":[],"lastnames":["Peralta"],"suffixes":[]},{"firstnames":["A."],"propositions":[],"lastnames":["Soto"],"suffixes":[]}],"title":"Embedded local feature selection within mixture of experts","journal":"Information Sciences","volume":"269","pages":"176-187","year":"2014","abstract":"A useful strategy to deal with complex classification scenarios is the divide and conquer approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of experts and gate function, particularly, in the case of high dimensional data. In general, subsets of dimensions, or subspaces, are usually more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes with a regularized variant of MoE that incorporates an embedded process for local feature selection using L1-regularization. Experiments using artificial and real-world datasets provide evidence that the proposed method improves the classical MoE technique, in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data.","url":"http://saturno.ing.puc.cl/media/papers_alvaro/RMoE.pdf","bibtex":"@Article{\t peralta:soto:2014,\n author\t= {B. Peralta and A. Soto},\n title\t\t= {Embedded local feature selection within mixture of\n\t\t experts},\n journal\t= {Information Sciences},\n volume\t= {269},\n pages\t\t= {176-187},\n year\t\t= {2014},\n abstract\t= {A useful strategy to deal with complex classification\n\t\t scenarios is the divide and conquer approach. 
The mixture\n\t\t of experts (MoE) technique makes use of this strategy by\n\t\t jointly training a set of classifiers, or experts, that are\n\t\t specialized in different regions of the input space. A\n\t\t global model, or gate function, complements the experts by\n\t\t learning a function that weighs their relevance in\n\t\t different parts of the input space. Local feature selection\n\t\t appears as an attractive alternative to improve the\n\t\t specialization of experts and gate function, particularly,\n\t\t in the case of high dimensional data. In general, subsets\n\t\t of dimensions, or subspaces, are usually more appropriate\n\t\t to classify instances located in different regions of the\n\t\t input space. Accordingly, this work contributes with a\n\t\t regularized variant of MoE that incorporates an embedded\n\t\t process for local feature selection using\n\t\t L1-regularization. Experiments using artificial and\n\t\t real-world datasets provide evidence that the proposed\n\t\t method improves the classical MoE technique, in terms of\n\t\t accuracy and sparseness of the solution. Furthermore, our\n\t\t results indicate that the advantages of the proposed\n\t\t technique increase with the dimensionality of the data.},\n url\t\t= {http://saturno.ing.puc.cl/media/papers_alvaro/RMoE.pdf}\n}\n\n","author_short":["Peralta, B.","Soto, A."],"key":"peralta:soto:2014","id":"peralta:soto:2014","bibbaseid":"peralta-soto-embeddedlocalfeatureselectionwithinmixtureofexperts-2014","role":"author","urls":{"Paper":"http://saturno.ing.puc.cl/media/papers_alvaro/RMoE.pdf"},"metadata":{"authorlinks":{"soto, a":"https://asoto.ing.puc.cl/publications/"}},"downloads":9},"bibtype":"article","biburl":"https://raw.githubusercontent.com/ialab-puc/ialab.ing.puc.cl/master/pubs.bib","downloads":9,"keywords":[],"search_terms":["embedded","local","feature","selection","within","mixture","experts","peralta","soto"],"title":"Embedded local feature selection within mixture of experts","year":2014,"dataSources":["3YPRCmmijLqF4qHXd","sg6yZ29Z2xB5xP79R","sj4fjnZAPkEeYdZqL","m8qFBfFbjk9qWjcmJ","QjT2DEZoWmQYxjHXS"]}