{"_id":{"_str":"534259580e946d920a000a70"},"__v":1,"authorIDs":["2ACfCTBEv4pRPLwBb","4DQrsTmafKuPbvKom","5457dd852abc8e9f3700082c","5de76c4f179cbdde01000135","5de7b92fbc280fdf01000192","5de7e861c8f9f6df01000188","5de7ff309b61e8de0100005f","5de917d35d589edf01000025","5de93bf8b8c3f8de010000a3","5de95819d574c6de010000d5","5de96615d574c6de010001ab","5de9faf7fac96fde01000039","5dea1112fac96fde01000194","5deb75f49e04d1df010000c8","5deb8542b62591df0100002d","5deb946fb62591df010000ef","5decb37d93ac84df01000108","5dece9a3619535de010000f9","5dee20da584fb4df0100023f","5dee5ebb773914de01000077","5dee6b12773914de0100015a","5deea5af0ceb4cdf01000193","5deee4cc66e59ade01000133","5def23c6e83f7dde0100003c","5def2e39e83f7dde010000a6","5def601cfe2024de01000084","5defdd35090769df01000181","5df0938cf651f5df01000056","5df0980df651f5df010000a2","5df0c74096fa76de01000024","5df0eda045b054df010000fb","5df2008fe4cb4ede01000035","5df2583563aac8df010000ad","5df25ae963aac8df010000dd","5df28978cf8320de0100001f","5df3756223fb6fdf010000fe","5df38d112b1f8ade01000086","5df3f9cad1756cdf01000039","5df4ca0755b997de0100009a","5df4cd8055b997de010000c2","5df53e56fd245cde01000125","5df60b78a37a40df01000156","5df62fce38e915de0100004b","5df6491ddf30fcdf0100003d","5df67503797ba9de01000104","5df6983872bbd4df01000160","5df6b0e031a37ade01000178","5df789d35c8a36df010000f7","5df7c23392a8e4df010000da","5df7dafbdc100cde010000e1","5df7e65edc100cde010001c6","5df89d4010b1d1de01000088","5df8b0cee6b510df01000021","5df93745d04b27df01000185","5df9d77138a7afde01000084","5dfa483ced5baede0100011b","5dfa67a37d1403df01000123","5dfbc3f34705b7de01000022","5dfcc5cc7a3608de0100004f","5dfe49bfbfbabdde01000004","5e1dc9478d71ddde0100015d","5e29d9d0888177df0100011e","5e48c117f1ed39de0100008d","5e555c0ee89e5fde010000e6","5e55fa1c819fabdf0100003a","5e5b04db6e568ade0100001f","5hGMdsfN7BrXW6K8T","5vmPz2jJcYQdtZPiZ","6yoSqPPyPrLdz8e5Q","BYkXaBeGZENiggkom","Bm98SYMoSNDbYwKGj","EsmZfHTQHAoi4zrJ2","N6cuxqTfG9ybhWDqZ","PXRdnhZs2CXY9NLhX","Q7zrKoo
GeSy8NTBjC","QxWxCp32GcmNqJ9K2","WnMtdN4pbnNcAtJ9C","e3ZEg6YfZmhHyjxdZ","exw99o2vqr9d3BXtB","fnGMsMDrpkcjCLZ5X","gN5Lfqjgx8P4c7HJT","gxtJ9RRRnpW2hQdtv","hCHC3WLvySqxwH4eZ","jN4BRAzEpDg6bmHmM","mBpuinLcpSzpxcFaz","n3Tju5NZ6trek5XEM","n3hXojCsQTaqGTPyY","ovEhxZqGLG9hGfrun","rnZ6cT67qkowNdLgz","u6Fai3nvyHwLKZpPn","vcz5Swk9goZXRki2G","x9kDqsoXq57J2bEu5","xmZk6XEacSsFbo2Sy","xufS6EqKGDqRQs47H"],"author_short":["Hinton, G. E."],"bibbaseid":"hinton-learningtorepresentvisualinput-2010","bibdata":{"bibtype":"article","type":"article","abstract":"10.1098/rstb.2009.0200 One of the central problems in computational neuroscience is to understand how the object-recognition pathway of the cortex learns a deep hierarchy of nonlinear feature detectors. Recent progress in machine learning shows that it is possible to learn deep hierarchies without requiring any labelled data. The feature detectors are learned one layer at a time and the goal of the learning procedure is to form a good generative model of images, not to predict the class of each image. The learning procedure only requires the pairwise correlations between the activations of neuron-like processing units in adjacent layers. The original version of the learning procedure is derived from a quadratic ‘energy’ function but it can be extended to allow third-order, multiplicative interactions in which neurons gate the pairwise interactions between other neurons. A technique for factoring the third-order interactions leads to a learning module that again has a simple learning rule based on pairwise correlations. 
This module looks remarkably like modules that have been proposed by both biologists trying to explain the responses of neurons and engineers trying to create systems that can recognize objects.","added-at":"2010-11-30T22:39:03.000+0100","author":[{"propositions":[],"lastnames":["Hinton"],"firstnames":["Geoffrey","E."],"suffixes":[]}],"biburl":"http://www.bibsonomy.org/bibtex/2d6b206f4e844cddf8c2350da9a05edcb/smatthiesen","citeulike-article-id":"6377969","citeulike-linkout-0":"http://dx.doi.org/10.1098/rstb.2009.0200","citeulike-linkout-1":"http://rstb.royalsocietypublishing.org/content/365/1537/177.abstract","citeulike-linkout-2":"http://rstb.royalsocietypublishing.org/content/365/1537/177.full.pdf","day":"12","doi":"10.1098/rstb.2009.0200","interhash":"8202ab76a45c599dec3f83ecf63e5534","intrahash":"d6b206f4e844cddf8c2350da9a05edcb","journal":"Philosophical Transactions of the Royal Society B: Biological Sciences","keywords":"object recognition","month":"January","number":"1537","pages":"177–184","posted-at":"2009-12-15 08:37:24","priority":"2","title":"Learning to represent visual input","url_link":"http://dx.doi.org/10.1098/rstb.2009.0200","volume":"365","year":"2010","url_pdf":"absps/Philtrans2010.pdf","bibtex":"@Article{\t citeulike:6377969,\n abstract\t= {{10.1098/rstb.2009.0200 One of the central problems in\n\t\t computational neuroscience is to understand how the\n\t\t object-recognition pathway of the cortex learns a deep\n\t\t hierarchy of nonlinear feature detectors. Recent progress\n\t\t in machine learning shows that it is possible to learn deep\n\t\t hierarchies without requiring any labelled data. The\n\t\t feature detectors are learned one layer at a time and the\n\t\t goal of the learning procedure is to form a good generative\n\t\t model of images, not to predict the class of each image.\n\t\t The learning procedure only requires the pairwise\n\t\t correlations between the activations of neuron-like\n\t\t processing units in adjacent layers. 
The original version\n\t\t of the learning procedure is derived from a quadratic\n\t\t ‘energy’ function but it can be extended to allow\n\t\t third-order, multiplicative interactions in which neurons\n\t\t gate the pairwise interactions between other neurons. A\n\t\t technique for factoring the third-order interactions leads\n\t\t to a learning module that again has a simple learning rule\n\t\t based on pairwise correlations. This module looks\n\t\t remarkably like modules that have been proposed by both\n\t\t biologists trying to explain the responses of neurons and\n\t\t engineers trying to create systems that can recognize\n\t\t objects.}},\n added-at\t= {2010-11-30T22:39:03.000+0100},\n author\t= {Hinton, Geoffrey E.},\n biburl\t= {http://www.bibsonomy.org/bibtex/2d6b206f4e844cddf8c2350da9a05edcb/smatthiesen}\n\t\t ,\n citeulike-article-id={6377969},\n citeulike-linkout-0={http://dx.doi.org/10.1098/rstb.2009.0200},\n citeulike-linkout-1={http://rstb.royalsocietypublishing.org/content/365/1537/177.abstract}\n\t\t ,\n citeulike-linkout-2={http://rstb.royalsocietypublishing.org/content/365/1537/177.full.pdf}\n\t\t ,\n day\t\t= {12},\n doi\t\t= {10.1098/rstb.2009.0200},\n interhash\t= {8202ab76a45c599dec3f83ecf63e5534},\n intrahash\t= {d6b206f4e844cddf8c2350da9a05edcb},\n journal\t= {Philosophical Transactions of the Royal Society B:\n\t\t Biological Sciences},\n keywords\t= {object recognition},\n month\t\t= {January},\n number\t= {1537},\n pages\t\t= {177--184},\n posted-at\t= {2009-12-15 08:37:24},\n priority\t= {2},\n title\t\t= {{Learning to represent visual input}},\n url_link\t= {http://dx.doi.org/10.1098/rstb.2009.0200},\n volume\t= {365},\n year\t\t= {2010},\n url_pdf\t= {absps/Philtrans2010.pdf}\n}\n\n","author_short":["Hinton, G. 
E."],"key":"citeulike:6377969","id":"citeulike:6377969","bibbaseid":"hinton-learningtorepresentvisualinput-2010","role":"author","urls":{" link":"http://dx.doi.org/10.1098/rstb.2009.0200"," pdf":"www.cs.toronto.edu/~fritz/absps/Philtrans2010.pdf"},"keyword":["object recognition"],"metadata":{"authorlinks":{"hinton, g":"https://bibbase.org/show?bib=www.cs.toronto.edu/~fritz/master3.bib&theme=side"}},"downloads":1,"html":""},"bibtype":"article","biburl":"www.cs.toronto.edu/~fritz/master3.bib","downloads":1,"keywords":["object recognition"],"search_terms":["learning","represent","visual","input","hinton"],"title":"Learning to represent visual input","year":2010,"dataSources":["avdRdTCKoXoyxo2tQ","GtChgCdrAm62yoP3L"]}