Learning structured and non-redundant representations with deep neural networks. Yang, J., Xiong, W., Li, S., & Xu, C. Pattern Recognition, 86:224–235, February 2019. doi: 10.1016/j.patcog.2018.08.017

Abstract: This paper proposes a novel regularizer, named the Structured Decorrelation Constraint, to address both the generalization and optimization of deep neural networks, including multilayer perceptrons and convolutional neural networks. Our proposed regularizer reduces overfitting by breaking the co-adaptations between neurons with an explicit penalty. As a result, the network is capable of learning non-redundant representations. Meanwhile, the proposed regularizer encourages the network to learn structured high-level features that aid its optimization during training. To this end, neurons are constrained to behave according to a group prior. Our regularizer applies to various types of layers, including fully connected layers, convolutional layers and normalization layers. Its loss can be minimized directly, along with the network's classification loss, by stochastic gradient descent. Experiments show that the proposed regularizer noticeably alleviates the overfitting problem of existing deep networks and yields substantially better performance across a range of datasets than conventional regularizers such as Dropout.
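For a concrete feel for this kind of penalty, the sketch below shows a generic covariance-based decorrelation regularizer added to a classification loss in PyTorch. It is only an illustration of the general idea described in the abstract, not the paper's exact Structured Decorrelation Constraint; the group size, penalty weight, and toy network are assumptions made for the example.

```python
# Minimal sketch of a covariance-based decorrelation penalty (PyTorch).
# NOTE: this is an illustrative stand-in, not the paper's Structured
# Decorrelation Constraint; group size and weight are arbitrary choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

def decorrelation_penalty(activations: torch.Tensor, group_size: int) -> torch.Tensor:
    """Penalize off-diagonal covariance between units in different groups.

    `activations` has shape (batch, units). Units are split into contiguous
    groups of `group_size`; only covariances across different groups are
    penalized, loosely mirroring a group prior on the hidden units.
    """
    n, d = activations.shape
    centered = activations - activations.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / max(n - 1, 1)          # (d, d) covariance matrix

    # Mask that is 1 for pairs of units belonging to different groups.
    groups = torch.arange(d, device=activations.device) // group_size
    between_groups = (groups.unsqueeze(0) != groups.unsqueeze(1)).float()

    return (cov.pow(2) * between_groups).sum() / d

# Hypothetical usage inside one SGD training step on MNIST-shaped data.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
params = list(model.parameters()) + list(classifier.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
hidden = model(x)
loss = F.cross_entropy(classifier(hidden), y) \
       + 1e-3 * decorrelation_penalty(hidden, group_size=16)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Penalizing only between-group covariances (rather than all off-diagonal terms) is one way to let units within a group stay correlated while groups remain decorrelated, which is roughly the structure the abstract's group prior suggests.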
@article{yang_learning_2019,
title = {Learning structured and non-redundant representations with deep neural networks},
volume = {86},
issn = {0031-3203},
url = {https://www.sciencedirect.com/science/article/pii/S0031320318303169},
doi = {10.1016/j.patcog.2018.08.017},
abstract = {This paper proposes a novel regularizer named Structured Decorrelation Constraint, to address both the generalization and optimization of deep neural networks, including multilayer perceptrons and convolutional neural networks. Our proposed regularizer reduces overfitting by breaking the co-adaptations between the neurons with an explicit penalty. As a result, the network is capable of learning non-redundant representations. Meanwhile, the proposed regularizer encourages the networks to learn structured high-level features to aid the networks’ optimization during training. To this end, neurons are constrained to behave according to a group prior. Our regularizer applies to various types of layers, including fully connected layers, convolutional layers and normalization layers. The loss of our regularizer can be directly minimized along with the network’s classification loss by stochastic gradient descent. Experiments show that the proposed regularizer noticeably alleviates the overfitting problem of existing deep networks. It yields much better performance across a range of datasets than conventional regularizers such as Dropout.},
language = {en},
urldate = {2021-10-18},
journal = {Pattern Recognition},
author = {Yang, Jihai and Xiong, Wei and Li, Shijun and Xu, Chang},
month = feb,
year = {2019},
keywords = {Decorrelation, Deep networks, Overfitting},
pages = {224--235},
}