GRADIENT CONVERGENCE IN GRADIENT METHODS. Bertsekas, D. P. & Tsitsiklis, J. N.
For the classical gradient method x_{t+1} = x_t − γ_t ∇f(x_t) and several deterministic and stochastic variants, we discuss the issue of convergence of the gradient sequence ∇f(x_t) and the attendant issue of stationarity of limit points of x_t. We assume that ∇f is Lipschitz continuous, and that the stepsize γ_t diminishes to 0 and satisfies standard stochastic approximation conditions. We show that either f(x_t) → −∞ or else f(x_t) converges to a finite value and ∇f(x_t) → 0 (with probability 1 in the stochastic case). Existing results assume various boundedness conditions such as boundedness from below of f, or boundedness of ∇f(x_t), or boundedness of x_t.
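
The iteration in the abstract is the classical gradient method with a diminishing stepsize. Below is a minimal Python sketch (our illustration, not code from the paper) of that iteration on a simple quadratic, with γ_t = 1/(t+1) chosen so that Σ γ_t = ∞ and Σ γ_t² < ∞; the gradient norm is driven to 0, matching the stated conclusion ∇f(x_t) → 0.

# Minimal sketch (illustrative only): gradient descent
#   x_{t+1} = x_t - gamma_t * grad_f(x_t)
# with a diminishing stepsize gamma_t = 1/(t+1), which satisfies the
# standard stochastic approximation conditions
#   sum_t gamma_t = infinity,  sum_t gamma_t^2 < infinity.
import numpy as np

def grad_f(x):
    # Example objective f(x) = 0.5 * ||x||^2, so grad f(x) = x.
    # This f happens to be bounded below; the paper's result does not
    # require such a boundedness assumption.
    return x

x = np.array([5.0, -3.0])
for t in range(10000):
    gamma = 1.0 / (t + 1)        # diminishing stepsize
    x = x - gamma * grad_f(x)    # classical gradient step

print(np.linalg.norm(grad_f(x)))  # ||grad f(x_t)|| tends to 0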
@article{bertsekas_gradient_nodate,
	title = {{GRADIENT} {CONVERGENCE} {IN} {GRADIENT} {METHODS}},
	abstract = {For the classical gradient method $x_{t+1} = x_t - \gamma_t \nabla f(x_t)$ and several deterministic and stochastic variants, we discuss the issue of convergence of the gradient sequence $\nabla f(x_t)$ and the attendant issue of stationarity of limit points of $x_t$. We assume that $\nabla f$ is Lipschitz continuous, and that the stepsize $\gamma_t$ diminishes to 0 and satisfies standard stochastic approximation conditions. We show that either $f(x_t) \to -\infty$ or else $f(x_t)$ converges to a finite value and $\nabla f(x_t) \to 0$ (with probability 1 in the stochastic case). Existing results assume various boundedness conditions such as boundedness from below of $f$, or boundedness of $\nabla f(x_t)$, or boundedness of $x_t$.},
	language = {en},
	author = {Bertsekas, Dimitri P. and Tsitsiklis, John N.},
	pages = {24}
}
