Transformation-invariant visual representations in self-organizing spiking neural networks. Evans, B. D. & Stringer, S. M. Frontiers in Computational Neuroscience, 6:46, 2012.
The ventral visual pathway achieves object and face recognition by building transformation-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transformation-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transformation-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP) where the change in synaptic strength is dependent on the relative times of the spikes emitted by the presynaptic and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.
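As a rough illustration of the learning rule named in the abstract, the Python sketch below implements a generic pair-based exponential STDP update, in which the sign and magnitude of the weight change depend on the relative timing of presynaptic and postsynaptic spikes. The functional form and all constants (A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS) are textbook-style assumptions for illustration only, not the specific parameterization used by Evans & Stringer in their conductance-based integrate-and-fire network.

import numpy as np

# Illustrative pair-based exponential STDP rule. The constants below are
# generic assumptions, not values taken from the paper.
A_PLUS, A_MINUS = 0.005, 0.005      # maximum fractional weight changes
TAU_PLUS, TAU_MINUS = 0.020, 0.020  # decay time constants (seconds)

def stdp_weight_change(t_pre, t_post):
    """Return the synaptic weight change for one pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic spike, the synapse
    is potentiated; if it follows, the synapse is depressed. The magnitude
    decays exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)    # long-term potentiation
    if dt < 0:
        return -A_MINUS * np.exp(dt / TAU_MINUS)  # long-term depression
    return 0.0

# Example: a presynaptic spike 5 ms before a postsynaptic spike strengthens
# the connection, while the reverse ordering weakens it.
print(stdp_weight_change(0.000, 0.005))  # > 0
print(stdp_weight_change(0.005, 0.000))  # < 0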
@article{evans_transformation-invariant_2012,
	title = {Transformation-invariant visual representations in self-organizing spiking neural networks},
	volume = {6},
	url = {http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00046/abstract},
	doi = {10.3389/fncom.2012.00046},
	abstract = {The ventral visual pathway achieves object and face recognition by building transformation-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transformation-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transformation-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP) where the change in synaptic strength is dependent on the relative times of the spikes emitted by the presynaptic and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.},
	urldate = {2013-08-01},
	journal = {Frontiers in Computational Neuroscience},
	author = {Evans, Benjamin D. and Stringer, Simon M.},
	year = {2012},
	keywords = {continuous transformation learning, inferior temporal cortex, integrate and fire, spiking neural net, trace learning, transformation-invariant visual object recognition},
	pages = {46}
}
