Learning multiple layers of representation. Hinton, G. E. Trends in Cognitive Sciences, 2007.
To achieve its impressive performance at tasks such as speech or object recognition, the brain extracts multiple levels of representation from the sensory input. Backpropagation was the first computationally efficient model of how neural networks could learn multiple layers of representation, but it required labeled training data and it did not work well in deep networks. The limitations of backpropagation learning can now be overcome by using multi-layer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it. Learning multilayer generative models appears to be difficult, but a recent discovery makes it easy to learn non-linear, distributed representations one layer at a time. The multiple layers of representation learned in this way can subsequently be fine-tuned to produce generative or discriminative models that work much better than previous approaches.
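The layer-at-a-time procedure the abstract refers to is greedy pretraining of stacked restricted Boltzmann machines, each trained with contrastive divergence (CD-1). The following is a minimal NumPy sketch of that idea, not the paper's implementation; the function names, layer sizes, and hyperparameters (`lr`, `epochs`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """Train a single RBM on binary data with CD-1 (hypothetical settings)."""
    n_vis = data.shape[1]
    W = rng.normal(0.0, 0.01, (n_vis, n_hidden))
    b_vis = np.zeros(n_vis)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            # Positive phase: hidden probabilities given the data.
            p_h0 = sigmoid(v0 @ W + b_hid)
            h0 = (rng.random(n_hidden) < p_h0).astype(float)
            # Negative phase: one step of Gibbs sampling (the "CD-1" shortcut).
            p_v1 = sigmoid(h0 @ W.T + b_vis)
            p_h1 = sigmoid(p_v1 @ W + b_hid)
            # Update toward data statistics and away from reconstruction statistics.
            W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
            b_vis += lr * (v0 - p_v1)
            b_hid += lr * (p_h0 - p_h1)
    return W, b_vis, b_hid

def greedy_pretrain(data, layer_sizes):
    """Stack RBMs: each trained layer's hidden activations become
    the visible data for the next layer."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b_vis, b_hid = train_rbm(x, n_hidden)
        layers.append((W, b_vis, b_hid))
        x = sigmoid(x @ W + b_hid)  # deterministic up-pass to the next layer
    return layers
```

After pretraining, the stacked weights can initialize a deep network that is fine-tuned generatively or discriminatively, which is the fine-tuning step the abstract describes.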