Neural networks, principal components, and subspaces. Oja, E. International Journal of Neural Systems, 1(1):61–68, January 1989.
A single neuron with Hebbian-type learning for the connection weights, and with nonlinear internal feedback, has been shown to extract the statistical principal components of its stationary input pattern sequence. A generalization of this model to a layer of neuron units is given, called the Subspace Network, which yields a multi-dimensional, principal component subspace. This can be used as an associative memory for the input vectors or as a module in nonsupervised learning of data clusters in the input space. It is also able to realize a powerful pattern classifier based on projections on class subspaces. Some classification results for natural textures are given.
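The abstract describes the Subspace Network: a layer of Hebbian units whose weights converge to a basis of the principal component subspace of the input distribution. As a quick illustration, below is a minimal NumPy sketch of the subspace learning rule in the form commonly associated with this line of work, ΔW = η(yxᵀ − yyᵀW) with y = Wx. The toy data, learning rate, and epoch count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 samples in R^5 with variance concentrated in 2 directions,
# so the principal subspace is (approximately) spanned by the first 2 axes.
n_samples, n_inputs, n_units = 500, 5, 2
X = rng.normal(size=(n_samples, n_inputs)) @ np.diag([3.0, 2.0, 0.3, 0.2, 0.1])

# One weight row per unit; small random initialization.
W = rng.normal(scale=0.1, size=(n_units, n_inputs))
eta = 0.01  # learning rate (assumed, for illustration)

for epoch in range(50):
    for x in X:
        y = W @ x  # unit outputs
        # Hebbian term y x^T plus the feedback term -y y^T W that keeps
        # the rows (approximately) orthonormal as learning proceeds.
        W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Check: the row space of W should match the top-2 PCA subspace of X.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:2].T @ Vt[:2]        # projector onto the true principal subspace
Q, _ = np.linalg.qr(W.T)
P_net = Q @ Q.T                  # projector onto the learned subspace
print("subspace mismatch:", np.linalg.norm(P_pca - P_net))
```

Note that the rule converges to a basis of the principal subspace, not to the individual principal components; any rotation of that basis is an equally valid fixed point, which is why the check above compares projectors rather than weight vectors.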
@article{oja1989,
	title = {Neural networks, principal components, and subspaces},
	volume = {1},
	issn = {0129-0657},
	url = {https://www.worldscientific.com/doi/abs/10.1142/S0129065789000475},
	doi = {10.1142/S0129065789000475},
	abstract = {A single neuron with Hebbian-type learning for the connection weights, and with nonlinear internal feedback, has been shown to extract the statistical principal components of its stationary input pattern sequence. A generalization of this model to a layer of neuron units is given, called the Subspace Network, which yields a multi-dimensional, principal component subspace. This can be used as an associative memory for the input vectors or as a module in nonsupervised learning of data clusters in the input space. It is also able to realize a powerful pattern classifier based on projections on class subspaces. Some classification results for natural textures are given.},
	number = {1},
	urldate = {2023-09-21},
	journal = {International Journal of Neural Systems},
	author = {Oja, Erkki},
	month = jan,
	year = {1989},
	pages = {61--68},
}
