Mixture of Experts with Entropic Regularization for Data Classification. Peralta, B., Saavedra, A., Caro, L., & Soto, A. Entropy, 21(2), 2019.
Today, there is growing interest in the automatic classification of a variety of tasks, such as weather forecasting, product recommendation, intrusion detection, and people recognition. “Mixture-of-experts” is a well-known classification technique: a probabilistic model consisting of local expert classifiers weighted by a gate network, typically based on softmax functions, that allows the model to learn complex patterns in the data. In this scheme, each data point tends to be influenced by a single expert; as a result, training can be misguided on real datasets in which complex data need to be explained by multiple experts. In this work, we propose a variant of the regular mixture-of-experts model in which the classification cost is penalized with the Shannon entropy of the gating network, in order to avoid a “winner-takes-all” output from the gating network. Experiments on several real datasets show the advantage of our approach, with improvements in mean accuracy of 3–6% on some datasets. In future work, we plan to embed feature selection into this model.
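The regularized objective described in the abstract can be sketched in a few lines. The following is a minimal PyTorch illustration, not the authors' implementation: the expert and gate architectures (plain linear layers here), the training procedure (direct gradient descent on the regularized loss), and the names EntropicMoE, entropic_loss, and lam are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EntropicMoE(nn.Module):
    # Mixture of experts: a softmax gate weights the per-expert class distributions.
    def __init__(self, in_dim, n_classes, n_experts):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(in_dim, n_classes) for _ in range(n_experts))
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x):
        gates = F.softmax(self.gate(x), dim=-1)            # (B, E) gate weights
        expert_probs = torch.stack(
            [F.softmax(e(x), dim=-1) for e in self.experts], dim=1)  # (B, E, C)
        mixture = (gates.unsqueeze(-1) * expert_probs).sum(dim=1)    # (B, C)
        return mixture, gates

def entropic_loss(mixture, gates, targets, lam=0.1):
    # Negative log-likelihood of the mixture, minus lam times the mean Shannon
    # entropy of the gate outputs: rewarding high gate entropy discourages a
    # "winner-takes-all" gating solution, as the abstract describes.
    nll = F.nll_loss(torch.log(mixture + 1e-12), targets)
    gate_entropy = -(gates * torch.log(gates + 1e-12)).sum(dim=-1).mean()
    return nll - lam * gate_entropy

A quick smoke test under the same assumptions:

model = EntropicMoE(in_dim=10, n_classes=3, n_experts=4)
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
probs, gates = model(x)
entropic_loss(probs, gates, y).backward()

Subtracting lam * gate_entropy from the negative log-likelihood makes the optimizer trade classification fit against keeping the gate distribution spread across experts, so complex inputs can be explained by more than one expert.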
@article{Peralta:EtAl:2019,
  Author = {B. Peralta and A. Saavedra and L. Caro and A. Soto},
  Title = {Mixture of Experts with Entropic Regularization for Data Classification},
  Journal = {Entropy},
  Volume = {21},
  Number = {2},
  Year = {2019},
  abstract = {Today, there is growing interest in the automatic classification of a variety of tasks, such as weather forecasting, product recommendation, intrusion detection, and people recognition. “Mixture-of-experts” is a well-known classification technique: a probabilistic model consisting of local expert classifiers weighted by a gate network, typically based on softmax functions, that allows the model to learn complex patterns in the data. In this scheme, each data point tends to be influenced by a single expert; as a result, training can be misguided on real datasets in which complex data need to be explained by multiple experts. In this work, we propose a variant of the regular mixture-of-experts model in which the classification cost is penalized with the Shannon entropy of the gating network, in order to avoid a “winner-takes-all” output from the gating network. Experiments on several real datasets show the advantage of our approach, with improvements in mean accuracy of 3–6\% on some datasets. In future work, we plan to embed feature selection into this model.},
  url = {https://www.mdpi.com/1099-4300/21/2/190}
}
