Orthogonal Procrustes analysis for dictionary learning in sparse linear representation. Grossi, G., Lanzarotti, R., & Lin, J. PLoS ONE, 12(1):1-16, 2017. doi:10.1371/journal.pone.0169663

Abstract: In the sparse representation model, the design of overcomplete dictionaries plays a key role in effectiveness and applicability across different domains. Recent research has produced several dictionary learning approaches, and it has been shown that dictionaries learnt from data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists of adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists of repeating two stages: the former promotes sparse coding of the training set, and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.
@article{grossi2017procrustes,
  title   = {Orthogonal Procrustes analysis for dictionary learning in sparse linear representation},
  author  = {Grossi, Giuliano and Lanzarotti, Raffaella and Lin, Jianyi},
  journal = {PLoS ONE},
  year    = {2017},
  volume  = {12},
  number  = {1},
  pages   = {1--16},
  doi     = {10.1371/journal.pone.0169663}
}
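The Orthogonal Procrustes step at the core of the paper's dictionary update has a well-known closed-form solution via the SVD. The sketch below illustrates only that classical sub-step, not the authors' full R-SVD algorithm (grouping of atoms, sparse coding stage, and stopping rules are omitted); the function name and test data are illustrative assumptions.

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Return the orthogonal matrix Q minimizing ||Q @ A - B||_F.

    Classical closed-form solution: compute the SVD of B @ A.T = U S V^T
    and take Q = U @ V^T. This is the generic Procrustes step, shown here
    for illustration; it is not the paper's complete R-SVD procedure.
    """
    U, _, Vt = np.linalg.svd(B @ A.T)
    return U @ Vt
```

Roughly speaking, in an R-SVD-style update A would hold the sparse codes associated with a group of atoms and B the corresponding target (training signals minus the contribution of the other atoms), with Q rotating the atom group to reduce the reconstruction error.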
{"_id":"2N8insYDywg7CXQLS","bibbaseid":"grossi-lanzarotti-lin-orthogonalprocrustesanalysisfordictionarylearninginsparselinearrepresentation-2017","authorIDs":["EF82ixfaSgYFdM2FT"],"author_short":["Grossi, G.","Lanzarotti, R.","Lin, J."],"bibdata":{"title":"Orthogonal procrustes analysis for dictionary learning in sparse linear representation","type":"article","year":"2017","pages":"1-16","volume":"12","id":"02133019-a277-387a-8515-f11810a03922","created":"2018-04-23T03:55:47.677Z","file_attached":"true","profile_id":"6bce6ab9-03b5-36ad-a474-26e482dc52c3","last_modified":"2018-04-23T03:59:05.466Z","read":false,"starred":false,"authored":"true","confirmed":"true","hidden":false,"private_publication":false,"abstract":"In the sparse representation model, the design of overcomplete dictionaries plays a key role for the effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, being proven that dictionaries learnt by data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic method for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages; the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts the Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. 
Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.;","bibtype":"article","author":"Grossi, Giuliano and Lanzarotti, Raffaella and Lin, Jianyi","doi":"10.1371/journal.pone.0169663","journal":"PLoS ONE","number":"1","bibtex":"@article{\n title = {Orthogonal procrustes analysis for dictionary learning in sparse linear representation},\n type = {article},\n year = {2017},\n pages = {1-16},\n volume = {12},\n id = {02133019-a277-387a-8515-f11810a03922},\n created = {2018-04-23T03:55:47.677Z},\n file_attached = {true},\n profile_id = {6bce6ab9-03b5-36ad-a474-26e482dc52c3},\n last_modified = {2018-04-23T03:59:05.466Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n private_publication = {false},\n abstract = {In the sparse representation model, the design of overcomplete dictionaries plays a key role for the effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, being proven that dictionaries learnt by data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic method for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. 
Essentially, it consists in repeating two stages; the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts the Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.;},\n bibtype = {article},\n author = {Grossi, Giuliano and Lanzarotti, Raffaella and Lin, Jianyi},\n doi = {10.1371/journal.pone.0169663},\n journal = {PLoS ONE},\n number = {1}\n}","author_short":["Grossi, G.","Lanzarotti, R.","Lin, J."],"urls":{"Paper":"https://bibbase.org/service/mendeley/6bce6ab9-03b5-36ad-a474-26e482dc52c3/file/7b4dc6aa-2155-948b-179c-7bd3997fd034/Pubblicazione_1.pdf.pdf"},"biburl":"https://bibbase.org/service/mendeley/6bce6ab9-03b5-36ad-a474-26e482dc52c3","bibbaseid":"grossi-lanzarotti-lin-orthogonalprocrustesanalysisfordictionarylearninginsparselinearrepresentation-2017","role":"author","metadata":{"authorlinks":{}},"downloads":0},"bibtype":"article","creationDate":"2020-05-28T13:54:21.130Z","downloads":0,"keywords":[],"search_terms":["orthogonal","procrustes","analysis","dictionary","learning","sparse","linear","representation","grossi","lanzarotti","lin"],"title":"Orthogonal procrustes analysis for dictionary learning in sparse linear representation","year":2017,"biburl":"https://bibbase.org/service/mendeley/6bce6ab9-03b5-36ad-a474-26e482dc52c3","dataSources":["KrZZ3KEZ3zvS84wws","ya2CyA73rpZseyrZ8","2252seNhipfTmjEBQ"]}