An interior point-proximal method of multipliers for convex quadratic programming
Related papers
A primal–dual regularized interior-point method for convex quadratic programs
Mathematical Programming Computation, 2012
Interior-point methods in augmented form for linear and convex quadratic programming require the solution of a sequence of symmetric indefinite linear systems which are used to derive search directions. Safeguards are typically required in order to handle free variables or rank-deficient Jacobians. We propose a consistent framework and accompanying theoretical justification for regularizing these linear systems. Our approach can be interpreted as a simultaneous proximal-point regularization of the primal and dual problems. The regularization is termed exact to emphasize that, although the problems are regularized, the algorithm recovers a solution of the original problem, for appropriate values of the regularization parameters.
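A minimal numerical sketch of this primal-dual regularization, on a small dense QP with a deliberately rank-deficient Jacobian (all names, sizes, and parameter values are illustrative, not the paper's notation):

```python
import numpy as np

# Sketch of primal-dual regularization of the IPM augmented system for
# min 0.5 x'Qx + c'x  s.t.  Ax = b, x >= 0. rho (primal) and delta (dual)
# play the role of the proximal regularization parameters.

rng = np.random.default_rng(0)
n, m = 6, 3
Q = np.diag(rng.uniform(0.0, 1.0, n))       # convex (PSD) Hessian
A = rng.standard_normal((m, n))
A[2] = A[0] + A[1]                          # deliberately rank-deficient Jacobian
theta_inv = rng.uniform(0.5, 2.0, n)        # X^{-1}Z scaling at the current iterate

def augmented_system(rho, delta):
    K = np.zeros((n + m, n + m))
    K[:n, :n] = -(Q + np.diag(theta_inv) + rho * np.eye(n))
    K[:n, n:] = A.T
    K[n:, :n] = A
    K[n:, n:] = delta * np.eye(m)
    return K

# With A rank-deficient and no regularization the matrix is singular;
# with rho, delta > 0 it is symmetric quasi-definite and always factorizable.
K0 = augmented_system(0.0, 0.0)
K1 = augmented_system(1e-6, 1e-6)
print(np.linalg.matrix_rank(K0), K0.shape[0])   # rank deficit without regularization
print(np.linalg.matrix_rank(K1), K1.shape[0])   # full rank with regularization
```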
An Interior Point-Proximal Method of Multipliers for Positive Semi-Definite Programming
arXiv (Cornell University), 2020
In this paper we generalize the Interior Point-Proximal Method of Multipliers (IP-PMM) presented in [An Interior Point-Proximal Method of Multipliers for Convex Quadratic Programming, Computational Optimization and Applications, 78, 307-351 (2021)] for the solution of linear positive Semi-Definite Programming (SDP) problems, allowing inexactness in the solution of the associated Newton systems. In particular, we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM) and interpret the algorithm (IP-PMM) as a primal-dual regularized IPM, suitable for solving SDP problems. We apply some iterations of an IPM to each sub-problem of the PMM until a satisfactory solution is found. We then update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.
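The outer PMM loop described above can be sketched on an equality-constrained convex QP. For brevity each subproblem is solved in closed form rather than by inner interior-point iterations, and the parameters rho and beta are illustrative, not the paper's choices:

```python
import numpy as np

# Minimal sketch of the Proximal Method of Multipliers (PMM) on
# min 0.5 x'Qx + c'x  s.t.  Ax = b. In IP-PMM each subproblem would be
# handled by a few interior-point iterations; here it has a closed form.

rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)                     # SPD Hessian
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

rho, beta = 1e-2, 100.0                     # proximal and penalty parameters
x, y = np.zeros(n), np.zeros(m)
for _ in range(500):
    # Subproblem: min_x f(x) + y'(Ax-b) + (beta/2)||Ax-b||^2 + (rho/2)||x-x_k||^2
    H = Q + rho * np.eye(n) + beta * A.T @ A
    g = -c - A.T @ y + beta * A.T @ b + rho * x
    x = np.linalg.solve(H, g)
    y = y + beta * (A @ x - b)              # multiplier (dual proximal) update

# Compare against a direct solve of the KKT system [Q A'; A 0][x; y] = [-c; b]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
xy = np.linalg.solve(K, np.concatenate([-c, b]))
print(np.allclose(x, xy[:n], atol=1e-6))    # PMM iterates recover the KKT solution
```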
An Interior Point-Proximal Method of Multipliers for Linear Positive Semi-Definite Programming
Journal of Optimization Theory and Applications
In this paper we generalize the Interior Point-Proximal Method of Multipliers (IP-PMM) presented in Pougkakiotis and Gondzio (Comput Optim Appl 78:307–351, 2021. 10.1007/s10589-020-00240-9) for the solution of linear positive Semi-Definite Programming (SDP) problems, allowing inexactness in the solution of the associated Newton systems. In particular, we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM) and interpret the algorithm (IP-PMM) as a primal-dual regularized IPM, suitable for solving SDP problems. We apply some iterations of an IPM to each sub-problem of the PMM until a satisfactory solution is found. We then update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.
Numerical Linear Algebra with Applications
In this paper, we address the efficient numerical solution of linear and quadratic programming problems, often of large scale. With this aim, we devise an infeasible interior point method, blended with the proximal method of multipliers, which in turn results in a primal-dual regularized interior point method. Application of this method gives rise to a sequence of increasingly ill-conditioned linear systems which cannot always be solved by factorization methods, due to memory and CPU time restrictions. We propose a novel preconditioning strategy which is based on a suitable sparsification of the normal equations matrix in the linear case, and also constitutes the foundation of a block-diagonal preconditioner to accelerate MINRES for linear systems arising from the solution of general quadratic programming problems. Numerical results for a range of test problems demonstrate the robustness of the proposed preconditioning strategy, together with its ability to solve linear systems of very large dimension.
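The column-dropping idea behind such a sparsification can be illustrated as follows; thresholds, problem sizes, and the dense inverse standing in for a sparse factorization are all assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

# Sketch of preconditioning the regularized normal equations
# N = A*Theta*A' + delta*I from an interior point method. Near optimality
# many theta_j are tiny, so dropping the corresponding columns of A yields
# a sparser matrix P that is cheap to factorize yet close to N.

rng = np.random.default_rng(2)
m, n = 40, 120
A = rng.standard_normal((m, n))
theta = np.where(rng.random(n) < 0.7, 1e-8, 1.0)   # most variables "inactive"
delta = 1e-4
N = A @ np.diag(theta) @ A.T + delta * np.eye(m)

keep = theta > 1e-6                                # drop columns with tiny theta_j
P = A[:, keep] @ np.diag(theta[keep]) @ A[:, keep].T + delta * np.eye(m)
P_inv = np.linalg.inv(P)                           # stand-in for a sparse Cholesky

def pcg(N, b, M_inv, tol=1e-10, maxit=200):
    """Bare-bones preconditioned conjugate gradient iteration."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv @ r
    p = z.copy()
    for its in range(1, maxit + 1):
        Np = N @ p
        alpha = (r @ z) / (p @ Np)
        x = x + alpha * p
        r_new = r - alpha * Np
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x, its
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

b = rng.standard_normal(m)
x_prec, it_prec = pcg(N, b, P_inv)
x_id, it_id = pcg(N, b, np.eye(m))
print(it_prec, it_id)      # preconditioned CG needs far fewer iterations
```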
2014
This paper presents and studies the iteration-complexity of two new inexact variants of Rockafellar’s proximal method of multipliers (PMM) for solving convex programming (CP) problems with a finite number of functional inequality constraints. In contrast to the first variant, which solves convex quadratic programming (QP) subproblems at every iteration, the second one solves constrained convex QP subproblems. Their complexity analyses are performed by: (a) viewing the original CP problem as a monotone inclusion problem (MIP); (b) proposing a large-step inexact higher-order proximal extragradient framework for MIPs; and (c) showing that the above two PMM variants are just instances of this framework.
2000 Mathematics Subject Classification: 90C25, 90C30, 47H05.
A primal-dual proximal point algorithm for constrained convex programs
Applied Mathematics and Computation, 2005
We present a primal-dual application of the proximal point algorithm to solve convex constrained minimization problems. Motivated by the work of Eckstein [Math. Oper. Res. 18 (1993) 203] on the generalized proximal point method, we propose a mixed proximal multipliers method that improves Theorem 1 of that paper.
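The classical proximal point iteration underlying such methods, x_{k+1} = argmin_x f(x) + (1/(2λ))‖x − x_k‖², can be illustrated on f = ‖·‖₁, whose proximal map is soft-thresholding (a generic sketch, not the paper's mixed primal-dual method):

```python
import numpy as np

# Classical proximal point iteration on f(x) = ||x||_1, whose proximal
# map has the closed form known as soft-thresholding.

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.array([3.0, -2.0, 0.5])
lam = 1.0                    # proximal stepsize (illustrative)
for k in range(5):
    x = prox_l1(x, lam)      # x_{k+1} = prox_{lam*f}(x_k)
print(x)                     # every coordinate reaches the minimizer 0
```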
Optimization
This paper studies the iteration-complexity of a new primal-dual algorithm based on Rockafellar's proximal method of multipliers (PMM) for solving smooth convex programming problems with inequality constraints. In each step, either a step of Rockafellar's PMM for a second-order model of the problem is computed or a relaxed extragradient step is performed. The resulting algorithm is a (large-step) relaxed hybrid proximal extragradient (r-HPE) method of multipliers, which combines Rockafellar's PMM with the r-HPE method.
SIAM Journal on Optimization
This paper analyzes the iteration-complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. More specifically, the objective function is of the form f + h, where f is a differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with possibly unbounded domain. The method consists of applying an accelerated inexact proximal point method to approximately solve a sequence of quadratic penalized subproblems associated with the linearly constrained problem. Each subproblem of the proximal point method is in turn approximately solved by an accelerated composite gradient (ACG) method. It is shown that the proposed scheme generates a ρ-approximate stationary point in at most O(ρ^{-3}) ACG iterations. Finally, numerical results showing the efficiency of the proposed method are also given.
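A bare-bones sketch of the quadratic penalty outer loop, with a simple quadratic f standing in for the composite objective (so each subproblem has a closed form instead of the paper's inexact ACG solves; the rho schedule is arbitrary):

```python
import numpy as np

# Quadratic penalty scheme: the linearly constrained problem
# min f(x) s.t. Ax = b is replaced by a sequence of penalized subproblems
# min f(x) + (rho/2)||Ax - b||^2 with increasing rho. Here
# f(x) = 0.5||x||^2 + c'x, so each subproblem reduces to a linear solve.

rng = np.random.default_rng(3)
n, m = 5, 2
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

violations = []
for rho in [1.0, 10.0, 100.0, 1000.0]:
    # Stationarity of the subproblem: (I + rho*A'A) x = rho*A'b - c
    x = np.linalg.solve(np.eye(n) + rho * A.T @ A, rho * A.T @ b - c)
    violations.append(np.linalg.norm(A @ x - b))
print(violations)    # constraint violation decays roughly like 1/rho
```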
Penalty/Barrier Multiplier Methods for Convex Programming Problems
SIAM Journal on Optimization, 1997
We study a class of methods for solving convex programs, which are based on nonquadratic augmented Lagrangians for which the penalty parameters are functions of the multipliers. This gives rise to Lagrangians which are nonlinear in the multipliers. Each augmented Lagrangian is specified by a choice of a penalty function φ and a penalty-updating function π. The requirements on φ are mild, and allow for the inclusion of most of the previously suggested augmented Lagrangians. More importantly, a new type of penalty/barrier function (having a logarithmic branch glued to a quadratic branch) is introduced and used to construct an efficient algorithm. Convergence of the algorithms is proved for the case of π being a sublinear function of the dual multipliers. The algorithms are tested on large-scale quadratically constrained problems arising in structural optimization.
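One possible way to glue a logarithmic branch to a quadratic branch into a single smooth, convex, increasing penalty is sketched below; the breakpoint and exact formulas are illustrative and need not match the paper's function:

```python
import numpy as np

# A C^2 glued penalty: the log branch -log(1 - t) blows up at t = 1, so
# beyond a breakpoint tau < 1 we switch to its second-order Taylor
# expansion, which is quadratic and finite everywhere.

TAU = 0.5                                   # breakpoint (illustrative)

def phi(t):
    t = np.asarray(t, dtype=float)
    v  = -np.log(1.0 - TAU)                 # value of the log branch at tau
    d1 = 1.0 / (1.0 - TAU)                  # first derivative at tau
    d2 = 1.0 / (1.0 - TAU) ** 2             # second derivative at tau
    quad = v + d1 * (t - TAU) + 0.5 * d2 * (t - TAU) ** 2
    # clip avoids log-of-nonpositive warnings on the branch not selected
    return np.where(t < TAU, -np.log(np.clip(1.0 - t, 1e-300, None)), quad)

# phi(0) = 0 and phi'(0) = 1, the usual normalization for such penalties.
print(phi(0.0), (phi(1e-6) - phi(-1e-6)) / 2e-6)
```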
A multiplier method with a class of penalty functions for convex programming
2016
We consider a class of augmented Lagrangian methods for solving convex programming problems with inequality constraints. This class involves a family of penalty functions and specific values of parameters p, q, ỹ ∈ R and c > 0. The penalty family includes the classical modified barrier and the exponential function. The associated proximal method for solving the dual problem is also considered. Convergence results are shown; specifically, we prove that any limit points of the primal and dual sequences generated by the algorithms are optimal solutions of the primal and dual problems, respectively.
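The exponential penalty, one member of this family, can be illustrated by the corresponding multiplier method on a one-dimensional problem (all constants here are illustrative, and the augmented Lagrangian form is the standard exponential one rather than the paper's parameterized family):

```python
import numpy as np

# Exponential multiplier method on min x^2 s.t. g(x) = 1 - x <= 0.
# Augmented Lagrangian: f(x) + (lam/c) * (exp(c*g(x)) - 1);
# multiplier update: lam <- lam * exp(c*g(x)). The solution is x* = 1
# with multiplier lam* = 2.

c, lam, x = 1.0, 1.0, 0.0
for _ in range(50):
    # Minimize in x via Newton on the stationarity condition
    # h(x) = 2x - lam*exp(c*(1-x)) = 0 (h is strictly increasing).
    for _ in range(50):
        h = 2 * x - lam * np.exp(c * (1 - x))
        x -= h / (2 + c * lam * np.exp(c * (1 - x)))
    lam *= np.exp(c * (1 - x))       # exponential multiplier update
print(x, lam)                        # approaches x* = 1, lam* = 2
```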