An exact penalty-Lagrangian approach for large-scale nonlinear programming
Related papers
Fruitful uses of smooth exact merit functions in constrained optimization
Applied Optimization, 2003
In this paper we are concerned with continuously differentiable exact merit functions as a means to solve constrained optimization problems, including problems of considerable dimension. In order to give a complete understanding of the fundamental properties of exact merit functions, we first review the development of smooth and exact merit functions. A recently proposed shifted barrier augmented Lagrangian function is then presented as a potentially powerful tool to solve large scale constrained optimization problems. This latter merit function, rather than being minimized directly, can be used more fruitfully to globalize efficient local algorithms, thus obtaining methods suitable for large scale problems. Moreover, by carefully choosing the search directions and the linesearch strategy, it is possible to define algorithms which are superlinearly convergent towards points satisfying first and second order necessary optimality conditions. We propose a general scheme for an algorithm employing such a merit function.
Mathematical Programming, 1986
In this paper a new continuously differentiable exact penalty function is introduced for the solution of nonlinear programming problems with compact feasible set. A distinguishing feature of the penalty function is that it is defined on a suitable bounded open set containing the feasible region and that it goes to infinity on the boundary of this set. This allows the construction of an implementable unconstrained minimization algorithm, whose global convergence towards Kuhn-Tucker points of the constrained problem can be established.
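A rough sketch of the construction described above (the violation measure, the set radius α, and the toy problem are illustrative assumptions, not the paper's definitions): the penalty is finite only on a bounded open set containing the feasible region and tends to infinity as the boundary of that set is approached, so an unconstrained descent method cannot leave the set.

```python
import math

def barrier_penalty(f, v, alpha, eps):
    """Toy exact-penalty sketch: defined on the open set {x : v(x) < alpha},
    where v >= 0 measures constraint violation; the function tends to +inf
    as v(x) -> alpha, and 1/eps weights the infeasibility term."""
    def F(x):
        vx = v(x)
        if vx >= alpha:
            return math.inf          # outside the open set
        return f(x) + vx / (eps * (alpha - vx))
    return F

# Hypothetical toy problem: minimize (x-2)^2 s.t. x <= 1,
# with l1-type violation measure v(x) = max(0, x - 1).
F = barrier_penalty(lambda x: (x - 2.0) ** 2,
                    lambda x: max(0.0, x - 1.0),
                    alpha=1.0, eps=0.1)
```

With eps small enough, the unconstrained minimizer of F sits at the constrained solution x = 1 (here F(1.0) = 1.0), while F blows up as x approaches 2 from below and is infinite beyond it.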
An approach to constrained global optimization based on exact penalty functions
Journal of Global Optimization, 2012
In the field of global optimization many efforts have been devoted to solving unconstrained global optimization problems. The aim of this paper is to show that unconstrained global optimization methods can also be used for solving constrained optimization problems, by resorting to an exact penalty approach. In particular, we make use of a nondifferentiable exact penalty function P_q(x; ε). We show that, under weak assumptions, there exists a threshold value ε̄ > 0 of the penalty parameter ε such that, for any ε ∈ (0, ε̄], any global minimizer of P_q is a global solution of the related constrained problem and conversely. On this basis, we describe an algorithm that, by combining an unconstrained global minimization technique for minimizing P_q for given values of the penalty parameter ε and an automatic updating of ε that occurs only a finite number of times, produces a sequence {x_k} such that any limit point of the sequence is a global solution of the related constrained problem. In the algorithm any efficient unconstrained global minimization technique can be used. In particular, we adopt an improved version of the DIRECT algorithm. Some numerical experimentation confirms the effectiveness of the approach.
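A minimal illustration of the approach (the toy problem, the choice q = 1, and a grid search standing in for a global solver such as DIRECT are all assumptions of the sketch, not the paper's setup):

```python
import numpy as np

def penalty(f, gs, q, eps):
    """Nondifferentiable exact penalty P_q(x; eps) = f(x) + (1/eps) ||g(x)^+||_q.

    f: objective; gs: list of inequality constraints g_i(x) <= 0;
    q: norm order; eps: penalty parameter."""
    def P(x):
        viol = np.array([max(0.0, g(x)) for g in gs])  # constraint violations g^+
        return f(x) + np.linalg.norm(viol, ord=q) / eps
    return P

# Hypothetical toy problem: minimize x^2 s.t. x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: float(x[0] ** 2)
g = lambda x: 1.0 - x[0]
P = penalty(f, [g], q=1, eps=0.1)

# A crude grid minimization over [-2, 2] stands in for the global solver.
grid = np.linspace(-2.0, 2.0, 4001)
xs = grid[np.argmin([P(np.array([t])) for t in grid])]
```

For eps = 0.1 the penalty is already exact on this example: the unconstrained minimizer of P coincides with the constrained solution x* = 1.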
A Globally Convergent Linearly Constrained Lagrangian Method for Nonlinear Optimization
SIAM Journal on Optimization, 2005
For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form "minimize an augmented Lagrangian function subject to linearized constraints." Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly.
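The structure of one LCL subproblem can be sketched as follows (function names, sign conventions, and the toy data are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def lcl_subproblem(f, c, J, x_k, y_k, rho):
    """Sketch of one LCL subproblem: minimize the augmented Lagrangian
    subject to the equality constraints linearized at the iterate x_k."""
    def aug_lag(x):                    # f(x) - y_k' c(x) + (rho/2) ||c(x)||^2
        cx = c(x)
        return f(x) - y_k @ cx + 0.5 * rho * (cx @ cx)
    def lin_con(x):                    # c(x_k) + J(x_k) (x - x_k), required = 0
        return c(x_k) + J(x_k) @ (x - x_k)
    return aug_lag, lin_con

# Toy problem: min ||x||^2 s.t. x1 + x2 - 1 = 0, linearized at x_k = 0.
aug_lag, lin_con = lcl_subproblem(
    lambda x: x @ x,
    lambda x: np.array([x[0] + x[1] - 1.0]),
    lambda x: np.array([[1.0, 1.0]]),
    x_k=np.zeros(2), y_k=np.array([1.0]), rho=10.0)
```

Since the toy constraint is linear, its linearization is exact, and the point x = (0.5, 0.5) satisfies the linearized constraint with augmented Lagrangian value 0.5.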
Some theoretical properties of an augmented Lagrangian merit function
1986
Sequential quadratic programming (SQP) methods for nonlinearly constrained optimization typically use a merit function to enforce convergence from an arbitrary starting point. We define a smooth augmented Lagrangian merit function in which the Lagrange multiplier estimate is treated as a separate variable, and inequality constraints are handled by means of non-negative slack variables that are included in the linesearch. Global convergence is proved for an SQP algorithm that uses this merit function. We also prove that steps of unity are accepted in a neighborhood of the solution when this merit function is used in a suitable superlinearly convergent algorithm. Finally, some numerical results are presented to illustrate the performance of the associated SQP method.
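The merit function described above can be sketched as follows (names, the sign convention, and the toy problem are assumptions of the sketch): the multiplier estimate lam is an argument alongside x, and nonnegative slacks s convert the inequalities g(x) >= 0 into the residual g(x) - s.

```python
import numpy as np

def merit(f, g, rho):
    """Sketch of an augmented Lagrangian merit function in which the
    multiplier estimate lam and the nonnegative slacks s are variables:
    M(x, lam, s) = f(x) - lam' (g(x) - s) + (rho/2) ||g(x) - s||^2."""
    def M(x, lam, s):
        r = g(x) - s                       # residual of g(x) = s, s >= 0
        return f(x) - lam @ r + 0.5 * rho * (r @ r)
    return M

# Hypothetical toy problem: minimize ||x||^2 s.t. x1 - 1 >= 0.
M = merit(lambda x: x @ x, lambda x: np.array([x[0] - 1.0]), rho=10.0)
```

At a point where the slacks match the constraint values the residual vanishes, so M reduces to the objective; away from that manifold the quadratic term penalizes the mismatch.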
Siam Journal on Numerical Analysis, 1991
The global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems are considered. In such methods, simple bound constraints are treated separately from more general constraints and the stopping rules for the inner minimization algorithm have this in mind. Global convergence is proved, and it is established that a potentially troublesome penalty parameter is bounded away from zero.
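A minimal sketch of such an outer loop (the toy problem and the crude grid search standing in for a proper bound-constrained inner solver are assumptions of the sketch): the bounds are enforced inside the subproblem, while the multiplier update handles the general constraint.

```python
import numpy as np

def auglag_bounds(f, c, lo, hi, rho=10.0, iters=20):
    """Sketch of an augmented Lagrangian outer loop for
    min f(x) s.t. c(x) = 0, lo <= x <= hi.  The bound constraint is
    treated separately, inside the inner minimization (here a crude
    grid search over [lo, hi])."""
    grid = np.linspace(lo, hi, 2001)
    y = 0.0
    for _ in range(iters):
        LA = lambda x: f(x) + y * c(x) + 0.5 * rho * c(x) ** 2
        x = grid[np.argmin([LA(t) for t in grid])]   # bound-constrained inner solve
        y += rho * c(x)                              # first-order multiplier update
    return x, y

# Hypothetical toy problem: min x^2 s.t. x - 1 = 0 on [0, 2]; solution x* = 1, y* = -2.
x, y = auglag_bounds(lambda x: x ** 2, lambda x: x - 1.0, 0.0, 2.0)
```

With rho fixed at 10 the multiplier iteration is a contraction on this example, so x approaches 1 and y approaches -2 without rho being driven to infinity, in the spirit of the boundedness result cited above.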
Mathematical Programming, 2006
We propose two line search primal-dual interior-point methods that approximately solve a sequence of equality constrained barrier subproblems. To solve each subproblem, our methods apply a modified Newton method and use an ℓ2-exact penalty function to attain feasibility. Our methods have strong global convergence properties under standard assumptions. Specifically, if the penalty parameter remains bounded, any limit point of the iterate sequence is either a KKT point of the barrier subproblem, or a Fritz-John (FJ) point of the original problem that fails to satisfy the Mangasarian-Fromovitz constraint qualification (MFCQ); if the penalty parameter tends to infinity, there is a limit point that is either an infeasible FJ point of the inequality constrained feasibility problem (an infeasible stationary point of the infeasibility measure if slack variables are added) or an FJ point of the original problem at which the MFCQ fails to hold. Numerical results are given that illustrate these outcomes.
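The equality constrained barrier subproblem at each outer iteration can be sketched as follows (the slack formulation, names, and toy data are assumptions of the sketch, not the paper's notation):

```python
import numpy as np

def barrier_subproblem(f, g, mu):
    """Sketch of the barrier subproblem for min f(x) s.t. g(x) <= 0:
    minimize f(x) - mu * sum(log s)  subject to  g(x) + s = 0, s > 0."""
    def phi(x, s):                      # barrier objective
        return f(x) - mu * np.sum(np.log(s))
    def residual(x, s):                 # equality constraint g(x) + s = 0
        return g(x) + s
    return phi, residual

# Hypothetical toy problem: min ||x||^2 s.t. x1 - 1 <= 0, barrier parameter mu = 0.1.
phi, residual = barrier_subproblem(
    lambda x: x @ x, lambda x: np.array([x[0] - 1.0]), mu=0.1)
```

Driving mu to zero recovers the original problem; at each fixed mu the methods above take modified Newton steps on this subproblem and use the ℓ2 exact penalty to restore the equality residual.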
An Exact Penalty Algorithm for Nonlinear Equality Constrained Optimization Problems
2007
In this paper we define a trust-region globalization strategy for solving a continuously differentiable nonlinear equality constrained minimization problem. The trust-region approach uses a penalty parameter that is proven to be uniformly bounded. Under rather weak hypotheses, and without the usual regularity assumption that the gradients of the linearized constraints are linearly independent, we prove that the hybrid algorithm is globally convergent. Moreover, under the standard hypotheses of the SQP method, we prove that the rate of convergence is q-quadratic.
Numerical study of augmented Lagrangian algorithms for constrained global optimization
Optimization, 2011
This paper presents a numerical study of two augmented Lagrangian algorithms to solve continuous constrained global optimization problems. The algorithms approximately solve a sequence of bound constrained subproblems whose objective function penalizes equality and inequality constraint violations and depends on the Lagrange multiplier vectors and a penalty parameter. Each subproblem is solved by a population-based method that uses an electromagnetism-like (EM) mechanism to move points towards optimality. Three local search procedures are tested to enhance the EM algorithm. Benchmark problems are solved in a performance evaluation of the proposed augmented Lagrangian methodologies. A comparison with other techniques presented in the literature is also reported.
A truncated Newton method in an augmented Lagrangian framework for nonlinear programming
Computational Optimization and Applications, 2010
In this paper we propose a primal-dual algorithm for the solution of general nonlinear programming problems. The core of the method is a local algorithm which relies on a truncated procedure for the computation of a search direction, and is thus suitable for large scale problems. The truncated direction produces a sequence of points which locally converges to a KKT pair with superlinear convergence rate.
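A truncated direction of this kind is typically computed by an inexact conjugate-gradient solve of the Newton system; the sketch below is a generic illustration of that idea under assumed names and stopping rules, not the paper's algorithm:

```python
import numpy as np

def truncated_cg(H, g, tol=0.1, max_iter=50):
    """Sketch of a truncated Newton step: approximately solve H d = -g
    with conjugate gradients, stopping early once the residual is small
    relative to ||g||, or on nonpositive curvature (then fall back to
    the steepest-descent direction if no progress has been made)."""
    d = np.zeros_like(g)
    r = -g.copy()                      # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(max_iter):
        Hp = H @ p
        pHp = p @ Hp
        if pHp <= 0:                   # nonpositive curvature: truncate
            return d if np.any(d) else -g
        alpha = (r @ r) / pHp
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(g):
            return d                   # accuracy sufficient; stop early
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d
```

Because only Hessian-vector products H @ p are needed, the direction can be computed without forming or factorizing H, which is what makes the approach suitable for large scale problems.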