A Comparison Between Spiking and Differentiable Recurrent Neural Networks on Spoken Digit Recognition. Graves, A., Beringer, N., & Schmidhuber, J. In The 23rd IASTED International Conference on Modelling, Identification, and Control, Grindelwald, 2004.
In this paper we demonstrate that Long Short-Term Memory (LSTM) is a differentiable recurrent neural net (RNN) capable of robustly categorizing timewarped speech data. We measure its performance on a spoken digit identification task, where the data was spike-encoded in such a way that classifying the utterances became a difficult challenge in non-linear timewarping. We find that LSTM gives greatly superior results to an SNN found in the literature, and conclude that the architecture has a place in domains that require the learning of large timewarped datasets, such as automatic speech recognition.
@INPROCEEDINGS{graves+beringer+schmidhuber:2004,
  AUTHOR = {A. Graves and N. Beringer and J. Schmidhuber},
  TITLE = {A Comparison Between Spiking and Differentiable Recurrent Neural Networks on Spoken Digit Recognition},
  BOOKTITLE = {The 23rd {IASTED} International Conference on Modelling, Identification, and Control},
  ADDRESS = {Grindelwald},
  YEAR = {2004},
  SOURCE = {OwnPublication},
  ABSTRACT = {In this paper we demonstrate that Long Short-Term Memory (LSTM) is a differentiable recurrent neural net (RNN) capable of robustly categorizing timewarped speech data. We measure its performance on a spoken digit identification task, where the data was spike-encoded in such a way that classifying the utterances became a difficult challenge in non-linear timewarping. We find that LSTM gives greatly superior results to an SNN found in the literature, and conclude that the architecture has a place in domains that require the learning of large timewarped datasets, such as automatic speech recognition.}
}
