LINEARLY CONVERGENT NONLINEAR CONJUGATE GRADIENT METHODS FOR A PARAMETER IDENTIFICATION PROBLEMS. Riahi, M. K. & Qattan, I. A. Appl. Comput. Math. (submitted), 2019. Paper: https://arxiv.org/abs/1806.10197
This paper presents a general description of a parameter estimation inverse problem for systems governed by nonlinear differential equations. The inverse problem is formulated with optimal control tools under state constraints, and the minimization is carried out with first-order optimization techniques: an adaptive monotony-backtracking steepest descent method and nonlinear conjugate gradient methods satisfying the strong Wolfe conditions. A global convergence theory for both methods is rigorously established, and new linear convergence rates are reported. In particular, for nonlinear non-convex optimization we show that, under Lipschitz continuity of the gradient of the objective function, the iterates converge linearly toward a stationary point. Furthermore, the nonlinear conjugate gradient method is shown to converge linearly toward stationary points whenever the second derivative of the objective function is bounded. The convergence analysis is established in a general constrained nonlinear non-convex optimization framework in which the time-dependent model may be a system of coupled ordinary differential equations or partial differential equations. Numerical evidence on a selection of popular nonlinear models is presented to support the theoretical results.
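For context on the class of methods the abstract refers to, the minimal sketch below shows a generic nonlinear conjugate gradient iteration with a Polak-Ribiere "plus" (PR+) direction update and a strong-Wolfe line search; it is not the paper's algorithm. The ncg function name, tolerance, iteration cap, and the Rosenbrock test problem are illustrative assumptions; SciPy's line_search does enforce the strong Wolfe conditions.

import numpy as np
from scipy.optimize import line_search

def ncg(f, grad, x0, tol=1e-8, max_iter=500):
    """Nonlinear CG (PR+ update) with a strong-Wolfe line search (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:           # stationarity test on the gradient
            break
        # SciPy's line_search returns a step satisfying the strong Wolfe conditions
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                     # line search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:
                break
        x = x + alpha * d
        g_new = grad(x)
        # PR+ coefficient, truncated at zero, a standard globalization device
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        g = g_new
    return x

if __name__ == "__main__":
    # Illustrative run on the Rosenbrock function (assumed test problem)
    f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    grad = lambda x: np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])
    print(ncg(f, grad, [-1.2, 1.0]))          # converges to (1, 1)

In the paper's setting the gradient would come from the constrained optimal control formulation (e.g., an adjoint computation on the governed differential system); the sketch above only illustrates the outer first-order optimization loop.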
@article{preprint,
abstract = {This paper presents a general description of a parameter estimation inverse problem for systems governed by nonlinear differential equations. The inverse problem is formulated with optimal control tools under state constraints, and the minimization is carried out with first-order optimization techniques: an adaptive monotony-backtracking steepest descent method and nonlinear conjugate gradient methods satisfying the strong Wolfe conditions. A global convergence theory for both methods is rigorously established, and new linear convergence rates are reported. In particular, for nonlinear non-convex optimization we show that, under Lipschitz continuity of the gradient of the objective function, the iterates converge linearly toward a stationary point. Furthermore, the nonlinear conjugate gradient method is shown to converge linearly toward stationary points whenever the second derivative of the objective function is bounded. The convergence analysis is established in a general constrained nonlinear non-convex optimization framework in which the time-dependent model may be a system of coupled ordinary differential equations or partial differential equations. Numerical evidence on a selection of popular nonlinear models is presented to support the theoretical results.},
author = {Riahi, M. K. and Qattan, I. A.},
journal = {Appl. Comput. Math. (submitted)},
keywords = {Convergence analysis, Dynamical systems, Nonlinear conjugate gradient methods, Nonlinear optimal control, Parameter estimation and inverse problems},
title = {{LINEARLY CONVERGENT NONLINEAR CONJUGATE GRADIENT METHODS FOR A PARAMETER IDENTIFICATION PROBLEMS}},
url = {https://arxiv.org/abs/1806.10197},
year = {2019}
}
