@misc{xu_iteration_2018,
title = {Iteration complexity of inexact augmented {Lagrangian} methods for constrained convex programming},
url = {http://arxiv.org/abs/1711.05812},
	abstract = {Augmented Lagrangian method (ALM) has been popularly used for solving constrained optimization problems. Practically, subproblems for updating primal variables in the framework of ALM usually can only be solved inexactly. The convergence and local convergence speed of ALM have been extensively studied. However, the global convergence rate of inexact ALM is still open for problems with nonlinear inequality constraints. In this paper, we work on general convex programs with both equality and inequality constraints. For these problems, we establish the global convergence rate of inexact ALM and estimate its iteration complexity in terms of the number of gradient evaluations to produce a solution with a specified accuracy. We first establish an ergodic convergence rate result of inexact ALM that uses constant penalty parameters or geometrically increasing penalty parameters. Based on the convergence rate result, we apply Nesterov's optimal first-order method on each primal subproblem and estimate the iteration complexity of the inexact ALM. We show that if the objective is convex, then $O(\varepsilon^{-1})$ gradient evaluations are sufficient to guarantee an $\varepsilon$-optimal solution in terms of both primal objective and feasibility violation. If the objective is strongly convex, the result can be improved to $O(\varepsilon^{-\frac{1}{2}} |\log \varepsilon|)$. Finally, by relating to the inexact proximal point algorithm, we establish a nonergodic convergence rate result of inexact ALM that uses geometrically increasing penalty parameters. We show that the nonergodic iteration complexity result is in the same order as that for the ergodic result. Numerical experiments on quadratically constrained quadratic programming are conducted to compare the performance of the inexact ALM with different settings.},
language = {en},
urldate = {2023-07-12},
publisher = {arXiv},
author = {Xu, Yangyang},
month = mar,
year = {2018},
note = {11 citations (Semantic Scholar/arXiv) [2023-07-14]
arXiv:1711.05812 [cs, math]},
keywords = {/unread, 90C06, 90C25, 68W40, 49M27, Computer Science - Data Structures and Algorithms, Mathematics - Numerical Analysis, Mathematics - Optimization and Control},
}