A Quadratically Convergent Proximal Algorithm For Nonnegative Tensor Decomposition. Vervliet, N., Themelis, A., Patrinos, P., & De Lathauwer, L. In 2020 28th European Signal Processing Conference (EUSIPCO), pages 1020-1024, Aug, 2020. Abstract: The decomposition of tensors into simple rank-1 terms is key in a variety of applications in signal processing, data analysis and machine learning. While this canonical polyadic decomposition (CPD) is unique under mild conditions, including prior knowledge such as nonnegativity can facilitate interpretation of the components. Inspired by the effectiveness and efficiency of Gauss-Newton (GN) for unconstrained CPD, we derive a proximal, semismooth GN-type algorithm for nonnegative tensor factorization. Global convergence to local minima is achieved via backtracking on the forward-backward envelope function. If the algorithm converges to a global optimum, we show that Q-quadratic rates are obtained in the exact case. Such fast rates are verified experimentally, and we illustrate that using the GN step significantly reduces the number of (expensive) gradient computations compared to proximal gradient descent.
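For context, the baseline the paper compares against is proximal gradient descent for nonnegative CPD, where the prox of the nonnegativity indicator reduces to an elementwise projection max(·, 0). The sketch below is an illustrative alternating projected-gradient solver, not the paper's semismooth Gauss-Newton algorithm; all function names and the per-block 1/L step-size rule are assumptions for the example.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product: (I x R), (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def nonneg_cpd_pgd(T, R, n_iter=2000, seed=0):
    """Alternating proximal (projected) gradient descent for nonnegative CPD,
    T ~ sum_r a_r o b_r o c_r.  Each block takes one forward-backward step:
    a gradient step with step size 1/L followed by projection onto the
    nonnegative orthant.  Illustrative baseline only, not the paper's GN method."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.random((n, R)) for n in (I, J, K))
    # Mode-n unfoldings (C-order): T1 ~ A @ khatri_rao(B, C).T, etc.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        for X, Tm, U, V in ((A, T1, B, C), (B, T2, A, C), (C, T3, A, B)):
            G = (U.T @ U) * (V.T @ V)             # Gram of the Khatri-Rao factor
            grad = X @ G - Tm @ khatri_rao(U, V)  # gradient of 0.5*||Tm - X KR^T||^2
            L = np.linalg.norm(G, 2)              # Lipschitz constant of this block
            X[:] = np.maximum(X - grad / L, 0.0)  # forward step, then prox = projection
    return A, B, C
```

Each sweep costs one (expensive) gradient evaluation per factor; the paper's point is that replacing the scaled-identity curvature model 1/L with a Gauss-Newton model drastically cuts the number of such sweeps near a solution.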
@InProceedings{9287549,
author = {N. Vervliet and A. Themelis and P. Patrinos and L. {De Lathauwer}},
booktitle = {2020 28th European Signal Processing Conference (EUSIPCO)},
title = {A Quadratically Convergent Proximal Algorithm For Nonnegative Tensor Decomposition},
year = {2020},
pages = {1020-1024},
abstract = {The decomposition of tensors into simple rank-1 terms is key in a variety of applications in signal processing, data analysis and machine learning. While this canonical polyadic decomposition (CPD) is unique under mild conditions, including prior knowledge such as nonnegativity can facilitate interpretation of the components. Inspired by the effectiveness and efficiency of Gauss-Newton (GN) for unconstrained CPD, we derive a proximal, semismooth GN-type algorithm for nonnegative tensor factorization. Global convergence to local minima is achieved via backtracking on the forward-backward envelope function. If the algorithm converges to a global optimum, we show that Q-quadratic rates are obtained in the exact case. Such fast rates are verified experimentally, and we illustrate that using the GN step significantly reduces the number of (expensive) gradient computations compared to proximal gradient descent.},
keywords = {Tensors;Machine learning algorithms;Data analysis;Signal processing algorithms;Europe;Machine learning;Signal processing;nonnegative tensor factorization;canonical polyadic decomposition;proximal methods;Gauss-Newton},
doi = {10.23919/Eusipco47968.2020.9287549},
issn = {2076-1465},
month = {Aug},
url = {https://www.eurasip.org/proceedings/eusipco/eusipco2020/pdfs/0001020.pdf},
}