Averaging and Derivative Estimation Within Stochastic Approximation Algorithms. Hashemi, F. & Pasupathy, R. In Laroque, C., Himmelspach, J., Pasupathy, R., Rose, O., & Uhrmacher, A., editors, Proceedings of the 2012 Winter Simulation Conference, pages 232–240, Piscataway, NJ, 2012. Institute of Electrical and Electronics Engineers, Inc.
Stochastic Approximation (SA) is arguably the most investigated among algorithms for solving local continuous simulation-optimization problems. Despite its enduring popularity, the prevailing opinion is that the finite-time performance of SA-type algorithms is not robust to SA's sequence of algorithm parameters. In the last two decades, two major advances have been proposed toward alleviating this: (i) Polyak-Ruppert averaging, where SA is executed in multiple time scales so that iterates can use large (initial) step sizes for better finite-time performance without sacrificing the asymptotic convergence rate; and (ii) efficient derivative estimation to allow better searching within the solution space. Interestingly, however, all existing literature on SA seems to treat each of these advances separately. In this article, we present two results that characterize convergence rates when (i) and (ii) are applied simultaneously. Our results should be seen as providing a theoretical basis for applying ideas that seem to work in practice.
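To make the two ingredients in the abstract concrete, the following is a minimal, purely illustrative Python sketch (not the authors' algorithm or the paper's analysis) that combines a Polyak-Ruppert average of SA iterates with a central finite-difference gradient estimate on a hypothetical noisy quadratic oracle; the function names, step-size and perturbation constants are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(x):
    """Hypothetical simulation oracle: a noisy quadratic standing in for a
    simulation response (illustrative only)."""
    return np.sum((x - 2.0) ** 2) + rng.normal(scale=0.1)

def fd_gradient(x, c):
    """Central finite-difference gradient estimate with perturbation size c."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = c
        g[i] = (noisy_objective(x + e) - noisy_objective(x - e)) / (2.0 * c)
    return g

def polyak_ruppert_sa(x0, n_iters=2000, a=0.5, alpha=0.6, c0=0.5, gamma=0.25):
    """SA with a slowly decaying step size a_k = a / k**alpha (alpha < 1,
    i.e., 'large' steps) plus a running Polyak-Ruppert average of the iterates."""
    x = np.array(x0, dtype=float)
    x_bar = np.zeros_like(x)
    for k in range(1, n_iters + 1):
        a_k = a / k ** alpha           # large, slowly decaying step size
        c_k = c0 / k ** gamma          # shrinking finite-difference perturbation
        x = x - a_k * fd_gradient(x, c_k)
        x_bar += (x - x_bar) / k       # running average of the SA iterates
    return x, x_bar

last_iterate, averaged_iterate = polyak_ruppert_sa([10.0, -5.0])
print("last iterate:     ", last_iterate)
print("averaged iterate: ", averaged_iterate)
```

In this sketch the averaged iterate is typically less sensitive to the choice of step-size constants than the raw iterate, which is the kind of robustness the averaging idea is meant to provide; the paper's contribution is the rate analysis when such averaging is combined with derivative estimation, which this toy example does not reproduce.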
