A learning result for continuous-time recurrent neural networks. Sontag, E. D. Systems Control Lett., 34(3):151–158, Elsevier Science Publishers B.V., Amsterdam, The Netherlands, 1998.
The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched.
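Read schematically, the bound stated in the abstract has the following shape; the symbols below ($m$ for the number of sampled inputs, $n$ for the network dimension, $p$ for the number of output derivatives being matched) are illustrative placeholders rather than the paper's notation, and the dependence on accuracy and confidence parameters is suppressed. The exact expression is given in the paper itself.

% Schematic reading of the sample-complexity statement (illustrative notation only)
\[
  m \;\le\; \mathrm{poly}(n)\,\cdot\,\log p .
\]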
@ARTICLE{MR1632338,
   AUTHOR       = {E.D. Sontag},
   JOURNAL      = {Systems Control Lett.},
   TITLE        = {A learning result for continuous-time recurrent neural 
      networks},
   YEAR         = {1998},
   NUMBER       = {3},
   PAGES        = {151--158},
   VOLUME       = {34},
   ADDRESS      = {Amsterdam, The Netherlands},
   KEYWORDS     = {neural networks, VC dimension, 
      recurrent neural networks},
   PUBLISHER    = {Elsevier Science Publishers B. V.},
   PDF          = {../../FTPDIR/recur-learn.pdf},
   ABSTRACT     = { The following learning problem is considered, for 
      continuous-time recurrent neural networks having sigmoidal activation 
      functions. Given a ``black box'' representing an unknown system, 
      measurements of output derivatives are collected, for a set of 
      randomly generated inputs, and a network is used to approximate the 
      observed behavior. It is shown that the number of inputs needed for 
      reliable generalization (the sample complexity of the learning 
      problem) is upper bounded by an expression that grows polynomially 
      with the dimension of the network and logarithmically with the number 
      of output derivatives being matched. },
   DOI          = {10.1016/S0167-6911(98)00006-1}
}
