Unsupervised Learning by Competing Hidden Units. Krotov, D. & Hopfield, J. J. Proceedings of the National Academy of Sciences, 116(16):7723–7731, April 2019.
@article{krotovUnsupervisedLearningCompeting2019,
  title = {Unsupervised Learning by Competing Hidden Units},
  author = {Krotov, Dmitry and Hopfield, John J.},
  date = {2019-04-16},
  journaltitle = {Proceedings of the National Academy of Sciences},
  shortjournal = {PNAS},
  volume = {116},
  pages = {7723--7731},
  issn = {0027-8424, 1091-6490},
  doi = {10.1073/pnas.1820458116},
  url = {https://doi.org/10.1073/pnas.1820458116},
  urldate = {2019-04-18},
  abstract = {[Significance]
Despite great success of deep learning a question remains to what extent the computational properties of deep neural networks are similar to those of the human brain. The particularly nonbiological aspect of deep learning is the supervised training process with the backpropagation algorithm, which requires massive amounts of labeled data, and a nonlocal learning rule for changing the synapse strengths. This paper describes a learning algorithm that does not suffer from these two problems. It learns the weights of the lower layer of neural networks in a completely unsupervised fashion. The entire algorithm utilizes local learning rules which have conceptual biological plausibility.

[Abstract]
It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can be used to train higher-layer weights in a usual supervised way so that the performance of the full network is comparable to the performance of standard feedforward networks trained end-to-end with a backpropagation algorithm on simple tasks.},
  eprint = {30926658},
  eprinttype = {pmid},
  keywords = {artificial-intelligence,artificial-neural-networks,back-propagation-networks,biological-deep-learning,computational-science,computational-science-automation,deep-learning,machine-learning,unsupervised-training},
  langid = {english},
  number = {16}
}
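The abstract describes a two-stage scheme: a purely local plasticity rule with global inhibition learns the hidden-layer weights without labels, and a conventional supervised step then trains the top layer on the frozen features. Below is a minimal NumPy sketch of such a competitive local rule, assuming the simplified case of plain dot-product currents (p = 2 in the paper's notation); the hyperparameter values (N_IN, K, RANK_K, DELTA, LR, and the activation power n) are illustrative assumptions, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

N_IN = 784     # input dimension (e.g., flattened 28x28 images)
K = 400        # number of hidden units
RANK_K = 7     # rank of the unit that receives the anti-Hebbian update
DELTA = 0.4    # strength of the anti-Hebbian (inhibitory) push
LR = 0.02      # learning rate

W = rng.normal(0.0, 1.0, size=(K, N_IN))

def local_update(W, v, lr=LR):
    # One unsupervised update from a single input v (shape: (N_IN,)).
    # The rule is local: each synapse W[mu, i] changes as a function of the
    # presynaptic activity v[i] and the postsynaptic current I[mu] only;
    # global inhibition enters solely through the ranking of the currents.
    I = W @ v                      # postsynaptic currents
    order = np.argsort(I)[::-1]    # hidden units ranked by current
    g = np.zeros(K)
    g[order[0]] = 1.0              # winner: Hebbian move toward v
    g[order[RANK_K - 1]] = -DELTA  # lower-ranked unit: pushed away from v
    # Hebbian term plus a decay that keeps each weight vector bounded.
    W += lr * (g[:, None] * (v[None, :] - I[:, None] * W))
    return W

def hidden_features(W, v, n=4.5):
    # Frozen feature map for the supervised stage: a rectified-polynomial
    # activation; the power n is a hyperparameter (4.5 is illustrative).
    return np.maximum(W @ v, 0.0) ** n

# Toy driver on random inputs (stands in for real image data):
for _ in range(2000):
    W = local_update(W, rng.random(N_IN))

After this unsupervised phase, hidden_features(W, v) would feed a single trainable top layer (e.g., a softmax classifier trained by ordinary gradient descent), which is the only supervised part of the pipeline the abstract describes.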
