Gradient-Based Training of Gaussian Mixture Models for High-Dimensional Streaming Data. Gepperth, A. & Pfülb, B. Neural Processing Letters, 53(6):4331-4348, Springer, December 2021. doi: 10.1007/s11063-021-10599-3. https://link.springer.com/article/10.1007/s11063-021-10599-3

Abstract: We present an approach for efficiently training Gaussian Mixture Models (GMMs) by Stochastic Gradient Descent (SGD) with non-stationary, high-dimensional streaming data. Our training scheme does not require data-driven parameter initialization (e.g., k-means) and can thus be trained from a random initial state. Furthermore, the approach allows mini-batch sizes as low as 1, which are typical for streaming-data settings. Major problems in such settings are undesirable local optima during early training phases and numerical instabilities due to high data dimensionalities. We introduce an adaptive annealing procedure to address the first problem, whereas numerical instabilities are eliminated by an exponential-free approximation to the standard GMM log-likelihood. Experiments on a variety of visual and non-visual benchmarks show that our SGD approach can be trained completely without, for instance, k-means based centroid initialization. It also compares favorably to an online variant of Expectation-Maximization (EM), stochastic EM (sEM), which it outperforms by a large margin for very high-dimensional data.
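To make the abstract's two ingredients concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes the "exponential-free approximation" behaves like the standard max-component lower bound on the GMM log-likelihood, log p(x) >= max_k [log pi_k + log N(x; mu_k, sigma_k^2 I)], uses diagonal covariances, and omits the paper's adaptive annealing procedure. All names, shapes, and hyperparameters below are illustrative placeholders.

    # Hedged sketch (not the paper's code): single-sample SGD training of a
    # diagonal-covariance GMM with a max-component log-likelihood bound, which
    # avoids evaluating exponentials over D-dimensional Mahalanobis terms.
    import numpy as np

    rng = np.random.default_rng(0)

    K, D = 8, 784                             # components, data dimensionality (placeholder values)
    mu = rng.normal(0.0, 0.1, size=(K, D))    # random initial state -- no k-means needed
    log_sigma2 = np.zeros((K, D))             # unconstrained log-variances
    logit_pi = np.zeros(K)                    # unnormalized mixture weights
    lr = 1e-2                                 # placeholder learning rate

    def max_component_loglik(x):
        """Lower bound: log p(x) >= max_k [log pi_k + log N(x; mu_k, sigma_k^2 I)].
        Everything stays in log space; exp() is only taken over the K weights."""
        log_pi = logit_pi - np.log(np.exp(logit_pi).sum())   # log-softmax over K terms
        sigma2 = np.exp(log_sigma2)
        log_norm = -0.5 * (np.log(2 * np.pi) * D + log_sigma2.sum(axis=1))
        log_comp = log_pi + log_norm - 0.5 * (((x - mu) ** 2) / sigma2).sum(axis=1)
        k_star = int(np.argmax(log_comp))
        return log_comp[k_star], k_star

    def sgd_step(x):
        """One streaming update (mini-batch size 1): manual gradient ascent on
        the winning component's term only."""
        global mu, log_sigma2, logit_pi
        _, k = max_component_loglik(x)
        sigma2_k = np.exp(log_sigma2[k])
        diff = x - mu[k]
        mu[k] += lr * diff / sigma2_k                             # d/dmu of log N
        log_sigma2[k] += lr * 0.5 * (diff ** 2 / sigma2_k - 1.0)  # d/d(log sigma^2)
        grad_pi = -np.exp(logit_pi) / np.exp(logit_pi).sum()      # d(log-softmax)/d(logit)
        grad_pi[k] += 1.0
        logit_pi += lr * grad_pi

    # Synthetic stand-in for a high-dimensional data stream.
    for t in range(10_000):
        sgd_step(rng.normal(0.0, 1.0, size=D))

    ll, k = max_component_loglik(rng.normal(0.0, 1.0, size=D))
    print(f"winning component {k}, per-sample log-likelihood bound {ll:.1f}")

Because the argmax is taken over log-domain quantities, the D-dimensional exponentials of the usual responsibilities are never evaluated, which is plausibly what keeps single-sample updates stable at high D; the paper's adaptive annealing, which the abstract says counters early local optima, would sit on top of this skeleton.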
@article{gepperth2021gradient,
title = {Gradient-Based Training of Gaussian Mixture Models for High-Dimensional Streaming Data},
year = {2021},
keywords = {Gaussian Mixture Model,High-Dimensional Streaming Data,Stochastic Gradient Descent},
pages = {4331-4348},
volume = {53},
url = {https://link.springer.com/article/10.1007/s11063-021-10599-3},
month = {12},
publisher = {Springer},
day = {1},
abstract = {We present an approach for efficiently training Gaussian Mixture Models (GMMs) by Stochastic Gradient Descent (SGD) with non-stationary, high-dimensional streaming data. Our training scheme does not require data-driven parameter initialization (e.g., k-means) and can thus be trained from a random initial state. Furthermore, the approach allows mini-batch sizes as low as 1, which are typical for streaming-data settings. Major problems in such settings are undesirable local optima during early training phases and numerical instabilities due to high data dimensionalities. We introduce an adaptive annealing procedure to address the first problem, whereas numerical instabilities are eliminated by an exponential-free approximation to the standard GMM log-likelihood. Experiments on a variety of visual and non-visual benchmarks show that our SGD approach can be trained completely without, for instance, k-means based centroid initialization. It also compares favorably to an online variant of Expectation-Maximization (EM), stochastic EM (sEM), which it outperforms by a large margin for very high-dimensional data.},
author = {Gepperth, Alexander and Pfülb, Benedikt},
doi = {10.1007/s11063-021-10599-3},
journal = {Neural Processing Letters},
number = {6}
}