Learning representations by back-propagating errors. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. Nature, 323(6088):533–536, 1986.
Describes back-propagation, which is a set of learning rules for neural networks. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal hidden units, not part of the input or output, come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from other learning procedures.
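The weight-adjustment loop the abstract describes is compact enough to sketch. The following is a minimal NumPy illustration, not the authors' original implementation: the two-layer sigmoid network, the XOR task, the learning rate, the hidden-layer size, and the number of training sweeps are all illustrative assumptions, and the squared error stands in for the "measure of the difference" between the actual and desired output vectors.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a task whose solution forces the hidden units to build new
# features (an illustrative choice, not taken from the paper itself).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # desired output vectors

# Randomly initialized weights: input -> hidden, hidden -> output.
W1, b1 = rng.normal(scale=1.0, size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(scale=1.0, size=(3, 1)), np.zeros(1)

lr = 0.5  # step size for the gradient-descent weight adjustments

for sweep in range(20000):
    # Forward pass: compute the actual output vector.
    H = sigmoid(X @ W1 + b1)        # hidden-unit states
    Y = sigmoid(H @ W2 + b2)        # output-unit states

    # Backward pass: propagate error derivatives through the layers.
    dY = (Y - T) * Y * (1 - Y)      # dE/dnet at the output (sigmoid derivative)
    dH = (dY @ W2.T) * H * (1 - H)  # chain rule back into the hidden layer

    # Adjust each weight down the error gradient.
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(Y.round(2))  # approaches the desired outputs [0, 1, 1, 0]

Whether a given run converges depends on the random start; the paper itself notes that gradient descent can settle in local minima, though in practice it rarely does on tasks of this size.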
@Article{backprop,
  author   = {Rumelhart, David E. and Hinton, Geoffrey E. and Williams, Ronald J.},
  journal  = {Nature},
  title    = {Learning representations by back-propagating errors},
  year     = {1986},
  number   = {6088},
  pages    = {533--536},
  volume   = {323},
  abstract = {Describes back-propagation, which is a set of learning rules for
	neural networks. The procedure repeatedly adjusts the weights of
	the connections in the network so as to minimize a measure of the
	difference between the actual output vector of the net and the desired
	output vector. As a result of the weight adjustments, internal hidden
	units, not part of the input or output, come to represent important
	features of the task domain, and the regularities in the task are
	captured by the interactions of these units. The ability to create
	useful new features distinguishes back-propagation from other learning
	procedures.},
}