A Newton-Type Method for Convex Quadratic Programming

A Smooth Newton Method for Nonlinear Programming Problems with Inequality Constraints

The paper presents a reformulation of the Karush-Kuhn-Tucker (KKT) system associated with a nonlinear programming problem into an equivalent system of smooth equations. The classical Newton method is applied to solve this system of equations. Superlinear convergence of the primal sequence generated by the proposed method is proved. Preliminary numerical results on a test set of problems are presented.
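The smoothing idea described above can be made concrete on a one-variable toy problem. The sketch below is our own illustration, not the paper's algorithm: it replaces the complementarity condition of the KKT system with the smoothed Fischer-Burmeister function phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2 mu^2), which is smooth for mu > 0 and enforces complementarity as mu -> 0, and applies the classical Newton method at each smoothing level.

```python
import math

# Toy problem: min x^2  s.t.  x >= 1   (solution x* = 1, multiplier l* = 2).
# KKT system: 2x - l = 0,  x - 1 >= 0,  l >= 0,  l*(x - 1) = 0.
# The complementarity pair (a, b) = (x - 1, l) is replaced by the smooth
# equation phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2) = 0.

def smooth_newton_kkt(x=0.0, l=0.0):
    for mu in (1e-1, 1e-3, 1e-6, 1e-9):      # smoothing continuation
        for _ in range(50):                   # classical Newton steps
            a, b = x - 1.0, l
            s = math.sqrt(a * a + b * b + 2.0 * mu * mu)
            F1 = 2.0 * x - l                  # stationarity residual
            F2 = a + b - s                    # smoothed complementarity
            if abs(F1) + abs(F2) < 1e-14:
                break
            # Jacobian of (F1, F2) w.r.t. (x, l) is [[2, -1], [p, q]]
            p = 1.0 - a / s
            q = 1.0 - b / s
            det = 2.0 * q + p                 # > 0 whenever mu > 0
            dx = (q * -F1 + 1.0 * -F2) / det  # explicit 2x2 solve
            dl = (-p * -F1 + 2.0 * -F2) / det
            x, l = x + dx, l + dl
    return x, l

x, l = smooth_newton_kkt()
```

Because phi_mu is differentiable everywhere for mu > 0, no generalized derivatives are needed; each stage is plain Newton, warm-started from the previous smoothing level.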

On the minimum-norm solution of convex quadratic programming

2021

We discuss some basic concepts and present a numerical procedure for finding the minimum-norm solution of convex quadratic programs (QPs) subject to linear equality and inequality constraints. Our approach is based on a theorem of alternatives and on a convenient characterization of the solution set of convex QPs. We show that this problem can be reduced to a simple constrained minimization problem with a once-differentiable convex objective function. We use finite termination of an appropriate Newton’s method to solve this problem. Numerical results show that the proposed method is efficient.
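To illustrate the object being computed (though not the paper's procedure): when a convex QP has multiple optima, its solution set is a polyhedron, and the minimum-norm solution is the projection of the origin onto that set. For the toy objective f(x) = 0.5*(x1 + x2 - 2)^2, the solution set is the hyperplane x1 + x2 = 2, so the projection has a closed form.

```python
# Projection of the origin onto the hyperplane {x : a.x = b},
# which is the minimum-norm point of that set:  x = a * b / (a.a).
def min_norm_on_hyperplane(a, b):
    s = sum(ai * ai for ai in a)
    return [ai * b / s for ai in a]

# Solution set of f(x) = 0.5*(x1 + x2 - 2)^2 is x1 + x2 = 2;
# its minimum-norm point is (1, 1).
x = min_norm_on_hyperplane([1.0, 1.0], 2.0)
```

For general convex QPs the solution set is not a single hyperplane, which is why the paper's characterization and Newton-based reduction are needed.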

An exterior Newton method for strictly convex quadratic programming

2000

We propose an exterior Newton method for strictly convex quadratic programming (QP) problems. This method is based on a dual formulation: a sequence of points is generated which monotonically decreases the dual objective function. We show that the generated sequence converges globally and quadratically to the solution (if the QP is feasible and certain nondegeneracy assumptions are satisfied). Measures for detecting infeasibility are provided. The major computation in each iteration is to solve a KKT-like system. Therefore, given an effective symmetric sparse linear solver, the proposed method is suitable for large sparse problems. Preliminary numerical results are reported.
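The "KKT-like system" that dominates each iteration can be made concrete on an equality-constrained QP. The sketch below (our own illustration with assumed data, not the paper's solver) assembles and solves the KKT system of min 1/2 x'Qx + c'x subject to Ax = b, whose solution is characterized by the linear system [[Q, A'], [A, 0]] [x; lambda] = [-c; b].

```python
def solve_linear(M, rhs):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for k in range(col, n + 1):
                aug[r][k] -= f * aug[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k]
                                for k in range(r + 1, n))) / aug[r][r]
    return x

def solve_eq_qp(Q, c, A, b):
    """Solve min 1/2 x'Qx + c'x  s.t.  Ax = b  via its KKT system."""
    n, m = len(Q), len(A)
    K = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):                 # upper-left block: Q
        for j in range(n):
            K[i][j] = Q[i][j]
    for i in range(m):                 # off-diagonal blocks: A and A'
        for j in range(n):
            K[n + i][j] = A[i][j]
            K[j][n + i] = A[i][j]
    sol = solve_linear(K, [-ci for ci in c] + list(b))
    return sol[:n], sol[n:]            # primal x, multipliers lambda

# min x1^2 + x2^2  s.t.  x1 + x2 = 2  ->  x = (1, 1), lambda = -2
x, lam = solve_eq_qp([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0],
                     [[1.0, 1.0]], [2.0])
```

In a large-scale setting the dense elimination above would be replaced by the sparse symmetric solver the abstract presupposes.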

Quadratic Programming with Quadratic Constraints

We give a quick and dirty, but reasonably safe, algorithm for the minimization of a convex quadratic function under convex quadratic constraints. The algorithm minimizes the Lagrangian dual by using a safeguarded Newton method with non-negativity constraints.

A semi-smooth Newton method for solving convex quadratic programming problem under simplicial cone constraint

In this paper the simplicial cone constrained convex quadratic programming problem is studied. The optimality conditions of this problem constitute a linear complementarity problem. Under a suitable condition, this leads to an equivalence between the simplicial cone constrained convex quadratic programming problem and that of finding the unique solution of a nonsmooth system of equations. It is shown that a semi-smooth Newton method applied to this nonsmooth system is always well defined, and under a mild assumption on the simplicial cone the method generates a sequence that converges linearly to its solution. We also show that the generated sequence is bounded for any starting point, and a formula for any accumulation point of this sequence is presented. The numerical results suggest that this approach achieves accurate solutions to large problems in a few iterations.
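The linear-complementarity reformulation above admits a compact semi-smooth Newton sketch (our own two-dimensional illustration, not the paper's code): LCP(M, q) asks for x >= 0 with Mx + q >= 0 and x'(Mx + q) = 0, which is equivalent to the nonsmooth system Phi(x) = min(x, Mx + q) = 0; each iteration solves a linear system with one element of the B-differential of Phi.

```python
def solve2(J, r):
    # explicit 2x2 linear solve (Cramer's rule)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(r[0] * J[1][1] - r[1] * J[0][1]) / det,
            (r[1] * J[0][0] - r[0] * J[1][0]) / det]

def semismooth_newton_lcp(M, q, x, iters=50, tol=1e-12):
    for _ in range(iters):
        w = [M[i][0] * x[0] + M[i][1] * x[1] + q[i] for i in range(2)]
        phi = [min(x[i], w[i]) for i in range(2)]     # Phi(x) = min(x, Mx+q)
        if max(abs(p) for p in phi) < tol:
            break
        # Element of the B-differential: identity row where x_i <= w_i,
        # the corresponding row of M otherwise.
        J = [([1.0, 0.0] if i == 0 else [0.0, 1.0]) if x[i] <= w[i]
             else M[i][:] for i in range(2)]
        d = solve2(J, [-p for p in phi])              # Newton step
        x = [x[0] + d[0], x[1] + d[1]]
    return x

# LCP with M = [[2,1],[1,2]], q = (-3,-3): solution x = (1, 1)
x = semismooth_newton_lcp([[2.0, 1.0], [1.0, 2.0]], [-3.0, -3.0], [0.0, 0.0])
```

Because Phi is piecewise linear here, the method often identifies the correct "active" pattern in very few steps, matching the behavior reported in the abstract.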

A Newton-Type Algorithm for the Solution of Inequality Constrained Minimization Problems

Operations Research Proceedings, 1995

We describe a new Newton-type algorithm for the solution of inequality constrained minimization problems. The algorithm is based on an active-set strategy and, at each iteration, only requires the solution of one linear system. Under mild assumptions, and without requiring strict complementarity, we prove q-quadratic convergence of the primal variables towards the solution.

Quadratically constrained quadratic programming: Some applications and a method for solution

Zeitschrift für Operations Research, 1982

A constructive method is used to prove that the intersection of the sets C_i is nonempty and that x(β), with β in this intersection, characterizes an optimal solution to (QPQR). Quite attractive numerical results have been reached with this method. Summary: This paper deals with applications of, and a new solution method for, the following problem (QPQR): minimize a convex quadratic objective function Q1(x) subject to convex quadratic constraints Qi(x) ≤ 0, i ∈ M1 := {2, …, m}, and/or linear constraints.

New algorithms for solving unconstrained optimization problems based on the generalized Newton method involving simple quadrature rules

2017

Solving systems of nonlinear equations is one of the most important problems in numerical analysis. Optimization problems can often be transformed into an equation F(x) = 0 with a nonsmooth function F, e.g. the nonlinear complementarity problem or the variational inequality problem. We consider some modifications of a generalized Newton method that use a subdifferential (above all, the B-differential) and are based on simple quadrature rules. We use these algorithms to solve unconstrained optimization problems in which the gradient of the objective function is not differentiable. Such problems can appear as subproblems in optimization. The proposed methods are locally and at least superlinearly convergent under mild conditions imposed on the gradient of the objective function. Finally, we present results of numerical tests.
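The two ingredients of the abstract, a B-differential in place of the derivative and a quadrature-rule modification of the Newton step, can be sketched on a scalar example (our own illustration; the specific variants in the paper may differ). The objective f(x) = 0.5*max(x,0)^2 + 0.5*x^2 - x has the continuous but nonsmooth gradient F(x) = max(x,0) + x - 1, with root x* = 0.5.

```python
def F(x):
    # Gradient of f(x) = 0.5*max(x,0)**2 + 0.5*x**2 - x; nonsmooth at 0.
    return max(x, 0.0) + x - 1.0

def dF(x):
    # One element of the B-differential of F (we pick 2.0 at the kink).
    return 2.0 if x >= 0.0 else 1.0

def generalized_newton(x, iters=50, tol=1e-12):
    # Plain generalized Newton: x <- x - F(x) / V,  V in the B-differential.
    for _ in range(iters):
        if abs(F(x)) < tol:
            break
        x = x - F(x) / dF(x)
    return x

def midpoint_newton(x, iters=50, tol=1e-12):
    # Quadrature-rule (midpoint) variant: evaluate the generalized
    # derivative at the midpoint of the ordinary Newton step.
    for _ in range(iters):
        if abs(F(x)) < tol:
            break
        half = x - F(x) / (2.0 * dF(x))
        x = x - F(x) / dF(half)
    return x
```

Both iterations reach the root x* = 0.5 from x0 = -1 in a couple of steps; the midpoint rule is one of the simplest quadrature-based modifications and, for smooth F, raises the local convergence order of Newton's method.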

Generalized quadratically constrained quadratic programming for signal processing

In this paper, we introduce and solve a particular generalization of the quadratically constrained quadratic programming (QCQP) problem which is frequently encountered in different fields of signal processing and communications. Specifically, we consider a generalization of the QCQP problem that comprises compositions of one-dimensional convex and quadratic functions in the constraint and objective functions. We show that this class of problems can be precisely or approximately recast as a difference-of-convex functions (DC) programming problem. Although the DC programming problem can be solved through branch-and-bound methods, these methods do not have any worst-case polynomial-time complexity guarantees. Therefore, we develop a new approach with worst-case polynomial-time complexity that can solve the corresponding DC problem of a generalized QCQP problem. It is analytically guaranteed that the point obtained by this method satisfies the Karush-Kuhn-Tucker (KKT) optimality conditions. Furthermore, global optimality can be proved analytically under certain conditions. The proposed method can be interpreted in terms of Newton's method as applied to an unconstrained optimization problem.

Augmented Lagrangian applied to convex quadratic problems

Applied Mathematics and Computation, 2008

An algorithm based on the Augmented Lagrangian method is proposed to solve convex quadratic programming problems. The quadratic penalty is considered here; hence, the Augmented Lagrangian function is quadratic when applied to a quadratic programming problem. For this penalty, we show that if the Lagrangian function associated with the original problem is strictly convex (or convex), then the Hessian matrix of the Augmented Lagrangian function is positive definite (or positive semi-definite). Numerical experiments illustrating the performance of the algorithm on the CUTE test set are presented.
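The structural point of the abstract, that with a quadratic penalty the Augmented Lagrangian of a QP is itself quadratic with Hessian Q + rho*A'A, can be seen in a minimal sketch (our own illustration with assumed data, not the paper's implementation) for min 1/2 x'Qx + c'x subject to a single equality constraint a.x = b: each outer iteration minimizes the quadratic Augmented Lagrangian exactly, then updates the multiplier.

```python
def solve2(M, r):
    # explicit 2x2 linear solve (Cramer's rule)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (r[1] * M[0][0] - r[0] * M[1][0]) / det]

def augmented_lagrangian_qp(Q, c, a, b, rho=10.0, iters=200):
    """min 1/2 x'Qx + c'x  s.t.  a.x = b  (n = 2, one constraint)."""
    lam, x = 0.0, [0.0, 0.0]
    for _ in range(iters):
        # Hessian of the Augmented Lagrangian: Q + rho * a a'  (PD here)
        H = [[Q[i][j] + rho * a[i] * a[j] for j in range(2)]
             for i in range(2)]
        # Stationarity: (Q + rho*a a') x = -c - a*lam + rho*a*b
        rhs = [-c[i] - a[i] * lam + rho * a[i] * b for i in range(2)]
        x = solve2(H, rhs)
        lam += rho * (a[0] * x[0] + a[1] * x[1] - b)   # multiplier update
    return x, lam

# min x1^2 + x2^2  s.t.  x1 + x2 = 2  ->  x = (1, 1), lam = -2
x, lam = augmented_lagrangian_qp([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0],
                                 [1.0, 1.0], 2.0)
```

Because each inner minimization is exact, the multiplier sequence contracts linearly at a rate that improves as rho grows, which is the usual trade-off against the conditioning of Q + rho*A'A.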