quadprog - Quadratic programming - MATLAB
Syntax
[wsout,fval,exitflag,output,lambda] = quadprog(H,f,A,b,Aeq,beq,lb,ub,ws)
Description
Solver for quadratic objective functions with linear constraints.
quadprog finds a minimum for a problem specified by
H, A, and Aeq are matrices, and f, b, beq, lb, ub, and x are vectors.
You can pass f, lb, and ub as vectors or matrices; see Matrix Arguments.
x = quadprog(H,f) returns a vector x that minimizes 1/2*x'*H*x + f'*x. The input H must be positive definite for the problem to have a finite minimum. If H is positive definite, then the solution is x = H\(-f).
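For a positive definite H and no constraints, you can sanity-check the solver against the backslash formula above. A minimal sketch (the H and f values here are illustrative, not from this page):

```matlab
H = [2 1; 1 3];        % positive definite: both eigenvalues are positive
f = [-4; -6];
x = quadprog(H,f);     % unconstrained minimum of 1/2*x'*H*x + f'*x
xdirect = H\(-f);      % closed-form solution of the stationarity condition H*x = -f
norm(x - xdirect)      % near zero: the two answers agree
```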
x = quadprog(H,f,A,b) minimizes 1/2*x'*H*x + f'*x subject to the restrictions A*x ≤ b. The input A is a matrix of doubles, and b is a vector of doubles.
x = quadprog(H,f,A,b,Aeq,beq) solves the preceding problem subject to the additional restrictions Aeq*x = beq. Aeq is a matrix of doubles, and beq is a vector of doubles. If no inequalities exist, set A = [] and b = [].
x = quadprog(H,f,A,b,Aeq,beq,lb,ub) solves the preceding problem subject to the additional restrictions lb ≤ x ≤ ub. The inputs lb and ub are vectors of doubles, and the restrictions hold for each x component. If no equalities exist, set Aeq = [] and beq = [].
Note
If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is [].
quadprog resets components of x0 that violate the bounds lb ≤ x ≤ ub to the interior of the box defined by the bounds. quadprog does not change components that respect the bounds.
x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0) solves the preceding problem starting from the vector x0. If no bounds exist, set lb = [] and ub = []. Some quadprog algorithms ignore x0; see x0.
Note
x0 is a required argument for the 'active-set' algorithm.
x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options) solves the preceding problem using the optimization options specified in options. Use optimoptions to create options. If you do not want to give an initial point, set x0 = [].
x = quadprog(problem) returns the minimum for problem, a structure described in problem. Create the problem structure using dot notation or the struct function. Alternatively, create a problem structure from an OptimizationProblem object by using prob2struct.
[x,fval] = quadprog(___), for any of the preceding input arguments, also returns fval, the value of the objective function at x.
[x,fval,exitflag,output] = quadprog(___) also returns exitflag, an integer that describes the exit condition of quadprog, and output, a structure that contains information about the optimization.
[x,fval,exitflag,output,lambda] = quadprog(___) also returns lambda, a structure whose fields contain the Lagrange multipliers at the solution x.
[wsout,fval,exitflag,output,lambda] = quadprog(H,f,A,b,Aeq,beq,lb,ub,ws) starts quadprog from the data in the warm start object ws, using the options in ws. The returned argument wsout contains the solution point in wsout.X. By using wsout as the initial warm start object in a subsequent solver call, quadprog can work faster.
Examples
Find the minimum of
f(x) = 1/2*x1^2 + x2^2 - x1*x2 - 2*x1 - 6*x2
subject to the constraints
x1 + x2 ≤ 2
-x1 + 2*x2 ≤ 2
2*x1 + x2 ≤ 3.
In quadprog syntax, this problem is to minimize
f(x) = 1/2*x'*H*x + f'*x,
where
H = [1 -1; -1 2], f = [-2; -6],
subject to the linear constraints.
To solve this problem, first enter the coefficient matrices.
H = [1 -1; -1 2];
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];
Call quadprog.
[x,fval,exitflag,output,lambda] = ...
    quadprog(H,f,A,b);
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Examine the final point, function value, and exit flag.
An exit flag of 1 means the result is a local minimum. Because H is a positive definite matrix, this problem is convex, so the minimum is a global minimum.
Confirm that H is positive definite by checking its eigenvalues.
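The eigenvalue check can be sketched as follows, assuming the H = [1 -1; -1 2] defined earlier:

```matlab
e = eig(H)      % approximately 0.3820 and 2.6180
all(e > 0)      % both eigenvalues are positive, so H is positive definite
```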
Find the minimum of
f(x) = 1/2*x1^2 + x2^2 - x1*x2 - 2*x1 - 6*x2
subject to the constraint
x1 + x2 = 0.
In quadprog syntax, this problem is to minimize
f(x) = 1/2*x'*H*x + f'*x,
where
H = [1 -1; -1 2], f = [-2; -6],
subject to the linear constraint.
To solve this problem, first enter the coefficient matrices.
H = [1 -1; -1 2];
f = [-2; -6];
Aeq = [1 1];
beq = 0;
Call quadprog, entering [] for the inputs A and b.
[x,fval,exitflag,output,lambda] = ...
    quadprog(H,f,[],[],Aeq,beq);
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Examine the final point, function value, and exit flag.
An exit flag of 1 means the result is a local minimum. Because H is a positive definite matrix, this problem is convex, so the minimum is a global minimum.
Confirm that H is positive definite by checking its eigenvalues.
Find the x that minimizes the quadratic expression
1/2*x'*H*x + f'*x
where
H = [1 -1 1; -1 2 -2; 1 -2 4], f = [2; -3; 1],
subject to the constraints
0 ≤ x ≤ 1, ∑x = 1/2.
To solve this problem, first enter the coefficients.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [2;-3;1];
lb = zeros(3,1);
ub = ones(size(lb));
Aeq = ones(1,3);
beq = 1/2;
Call quadprog, entering [] for the inputs A and b.
x = quadprog(H,f,[],[],Aeq,beq,lb,ub)
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1
0.0000
0.5000
0.0000
Set options to monitor the progress of quadprog.
options = optimoptions('quadprog','Display','iter');
Define a problem with a quadratic objective and linear inequality constraints.
H = [1 -1; -1 2];
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];
To help write the quadprog function call, set the unnecessary inputs to [].
Aeq = []; beq = []; lb = []; ub = []; x0 = [];
Call quadprog to solve the problem.
x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options)
Iter Fval Primal Infeas Dual Infeas Complementarity
0 -8.884885e+00 3.214286e+00 1.071429e-01 1.000000e+00
1 -8.331868e+00 1.321041e-01 4.403472e-03 1.910489e-01
2 -8.212804e+00 1.676295e-03 5.587652e-05 1.009601e-02
3 -8.222204e+00 8.381476e-07 2.793826e-08 1.809485e-05
4 -8.222222e+00 4.190870e-11 1.396883e-12 9.047989e-10
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Create a problem structure using a Problem-Based Optimization Workflow. Create an optimization problem equivalent to Quadratic Program with Linear Constraints.
x = optimvar('x',2);
objec = x(1)^2/2 + x(2)^2 - x(1)*x(2) - 2*x(1) - 6*x(2);
prob = optimproblem('Objective',objec);
prob.Constraints.cons1 = sum(x) <= 2;
prob.Constraints.cons2 = -x(1) + 2*x(2) <= 2;
prob.Constraints.cons3 = 2*x(1) + x(2) <= 3;
Convert prob to a problem structure.
problem = prob2struct(prob);
Solve the problem using quadprog.
[x,fval] = quadprog(problem)
Warning: Your Hessian is not symmetric. Resetting H=(H+H')/2.
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Solve a quadratic program and return both the solution and the objective function value.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [-7;-12;-15];
A = [1,1,1];
b = 3;
[x,fval] = quadprog(H,f,A,b)
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1
-3.5714
2.9286
3.6429
Check that the returned objective function value matches the value computed from the quadprog objective function definition.
fval2 = 1/2*x'*H*x + f'*x
To see the optimization process for quadprog, set options to show an iterative display and return four outputs. The problem is to minimize
1/2*x'*H*x + f'*x
subject to
0 ≤ x ≤ 1,
where
H = [2 1 -1; 1 3 1/2; -1 1/2 5], f = [4; -7; 12].
Enter the problem coefficients.
H = [2 1 -1; 1 3 1/2; -1 1/2 5];
f = [4;-7;12];
lb = zeros(3,1);
ub = ones(3,1);
Set the options to display iterative progress of the solver.
options = optimoptions('quadprog','Display','iter');
Call quadprog with four outputs.
[x,fval,exitflag,output] = quadprog(H,f,[],[],[],[],lb,ub,[],options)
Iter Fval Primal Infeas Dual Infeas Complementarity
0 2.691769e+01 1.582123e+00 1.712849e+01 1.680447e+00
1 -3.889430e+00 0.000000e+00 8.564246e-03 9.971731e-01
2 -5.451769e+00 0.000000e+00 4.282123e-06 2.710131e-02
3 -5.499995e+00 0.000000e+00 2.878422e-10 1.750743e-06
4 -5.500000e+00 0.000000e+00 1.454808e-13 8.753723e-10
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1
0.0000
1.0000
0.0000
output = struct with fields:
            message: 'Minimum found that satisfies the constraints.↵↵Optimization completed because the objective function is non-decreasing in ↵feasible directions, to within the value of the optimality tolerance,↵and constraints are satisfied to within the value of the constraint tolerance.↵↵↵↵Optimization completed: The relative dual feasibility, 1.212340e-14,↵is less than options.OptimalityTolerance = 1.000000e-08, the complementarity measure,↵8.753723e-10, is less than options.OptimalityTolerance, and the relative maximum constraint↵violation, 0.000000e+00, is less than options.ConstraintTolerance = 1.000000e-08.'
          algorithm: 'interior-point-convex'
      firstorderopt: 2.3577e-09
    constrviolation: 0
         iterations: 4
       linearsolver: 'dense'
       cgiterations: []
Solve a quadratic programming problem and return the Lagrange multipliers.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [-7;-12;-15];
A = [1,1,1];
b = 3;
lb = zeros(3,1);
[x,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb);
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Examine the Lagrange multiplier structure lambda.
ineqlin: 12.0000
eqlin: [0×1 double]
lower: [3×1 double]
upper: [3×1 double]
The linear inequality constraint has an associated Lagrange multiplier of 12.
Display the multipliers associated with the lower bound.
Only the first component of lambda.lower has a nonzero multiplier. This generally means that only the first component of x is at the lower bound of zero. Confirm by displaying the components of x.
To speed subsequent quadprog calls, create a warm start object.
options = optimoptions('quadprog','Algorithm','active-set');
x0 = [1 2 3];
ws = optimwarmstart(x0,options);
Solve a quadratic program using ws.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [-7;-12;-15];
A = [1,1,1];
b = 3;
lb = zeros(3,1);
tic
[ws,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb,[],ws);
toc
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Elapsed time is 0.060411 seconds.
Change the objective function and solve the problem again.
f = [-10;-15;-20];
tic
[ws,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb,[],ws);
toc
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Elapsed time is 0.010756 seconds.
Input Arguments
Quadratic objective term, specified as a symmetric real matrix. H represents the quadratic in the expression 1/2*x'*H*x + f'*x. If H is not symmetric, quadprog issues a warning and uses the symmetrized version (H + H')/2 instead.
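Symmetrizing is harmless because x'*H*x depends only on the symmetric part of H. A small sketch with an illustrative nonsymmetric H:

```matlab
H  = [1 2; 0 3];       % not symmetric; quadprog would warn and symmetrize it
Hs = (H + H')/2;       % symmetrized version that quadprog uses: [1 1; 1 3]
x  = randn(2,1);
x'*H*x - x'*Hs*x       % zero (up to round-off) for any x
```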
If the quadratic matrix H is sparse, then by default, the 'interior-point-convex' algorithm uses a slightly different algorithm than when H is dense. Generally, the sparse algorithm is faster on large, sparse problems, and the dense algorithm is faster on dense or small problems. For more information, see the LinearSolver option description and interior-point-convex quadprog Algorithm.
Example: [2,1;1,3]
Data Types: single | double
Linear objective term, specified as a real vector. f represents the linear term in the expression 1/2*x'*H*x + f'*x.
Example: [1;3;2]
Data Types: single | double
Initial point, specified as a real vector. The length of x0 is the number of rows or columns of H.
x0 applies to the 'trust-region-reflective' algorithm when the problem has only bound constraints. x0 also applies to the 'active-set' algorithm.
Note
x0 is a required argument for the 'active-set' algorithm.
If you do not specify x0, quadprog sets all components of x0 to a point in the interior of the box defined by the bounds. quadprog ignores x0 for the 'interior-point-convex' algorithm and for the 'trust-region-reflective' algorithm with equality constraints.
Example: [1;2;1]
Data Types: single | double
Optimization options, specified as the output of optimoptions or a structure such as optimset returns.
Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Optimization Options.
All Algorithms
Algorithm | Choose the algorithm: 'interior-point-convex' (default), 'trust-region-reflective', or 'active-set'. The 'interior-point-convex' algorithm handles only convex problems. The 'trust-region-reflective' algorithm handles problems with only bounds or only linear equality constraints, but not both. The 'active-set' algorithm handles indefinite problems provided that the projection of H onto the nullspace of Aeq is positive semidefinite. For details, see Choosing the Algorithm. |
---|---|
Diagnostics | Display diagnostic information about the function to be minimized or solved. The choices are 'on' or 'off' (default). |
Display | Level of display (see Iterative Display): 'off' or 'none' displays no output. 'final' displays only the final output (default). The 'interior-point-convex' and 'active-set' algorithms allow additional values: 'iter' specifies an iterative display; 'iter-detailed' specifies an iterative display with a detailed exit message; 'final-detailed' displays only the final output with a detailed exit message. |
MaxIterations | Maximum number of iterations allowed; a nonnegative integer. For a 'trust-region-reflective' equality-constrained problem, the default value is 2*(numberOfVariables - numberOfEqualities). 'active-set' has a default of 10*(numberOfVariables + numberOfConstraints). For all other algorithms and problems, the default value is 200. For optimset, the option name is MaxIter. See Current and Legacy Option Names. |
OptimalityTolerance | Termination tolerance on the first-order optimality; a nonnegative scalar. For a 'trust-region-reflective' equality-constrained problem, the default value is 1e-6. For a 'trust-region-reflective' bound-constrained problem, the default value is 100*eps, about 2.2204e-14. For the 'interior-point-convex' and 'active-set' algorithms, the default value is 1e-8. See Tolerances and Stopping Criteria. For optimset, the option name is TolFun. See Current and Legacy Option Names. |
StepTolerance | Termination tolerance on x; a nonnegative scalar. For 'trust-region-reflective', the default value is 100*eps, about 2.2204e-14. For 'interior-point-convex', the default value is 1e-12. For 'active-set', the default value is 1e-8. For optimset, the option name is TolX. See Current and Legacy Option Names. |
'trust-region-reflective' Algorithm Only
FunctionTolerance | Termination tolerance on the function value; a nonnegative scalar. The default value depends on the problem type: bound-constrained problems use 100*eps, and linear equality-constrained problems use 1e-6. See Tolerances and Stopping Criteria. For optimset, the option name is TolFun. See Current and Legacy Option Names. |
---|---|
HessianMultiplyFcn | Hessian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Hessian matrix product H*Y without actually forming H. The function has the form W = hmfun(Hinfo,Y), where Hinfo (and potentially some additional parameters) contains the matrices used to compute H*Y. See Quadratic Minimization with Dense, Structured Hessian for an example that uses this option. For optimset, the option name is HessMult. See Current and Legacy Option Names. |
MaxPCGIter | Maximum number of PCG (preconditioned conjugate gradient) iterations; a positive scalar. The default is max(1,floor(numberOfVariables/2)) for bound-constrained problems. For equality-constrained problems, quadprog ignores MaxPCGIter and uses MaxIterations to limit the number of PCG iterations. For more information, see Preconditioned Conjugate Gradient Method. |
PrecondBandWidth | Upper bandwidth of the preconditioner for PCG; a nonnegative integer. By default, quadprog uses diagonal preconditioning (upper bandwidth 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step toward the solution. |
SubproblemAlgorithm | Determines how the iteration step is calculated. The default, 'cg', takes a faster but less accurate step than 'factorization'. See trust-region-reflective quadprog Algorithm. |
TolPCG | Termination tolerance on the PCG iteration; a positive scalar. The default is 0.1. |
TypicalX | Typical x values. The number of elements in TypicalX equals the number of elements in x0, the starting point. The default value is ones(numberOfVariables,1). quadprog uses TypicalX internally for scaling. TypicalX has an effect only when x has unbounded components, and when a TypicalX value for an unbounded component exceeds 1. |
'interior-point-convex' Algorithm Only
ConstraintTolerance | Tolerance on the constraint violation; a nonnegative scalar. The default is 1e-8. For optimset, the option name is TolCon. See Current and Legacy Option Names. |
---|---|
LinearSolver | Type of internal linear solver in the algorithm: 'auto' (default) — use 'sparse' if the H matrix is sparse and 'dense' otherwise. 'sparse' — use sparse linear algebra. See Sparse Matrices. 'dense' — use dense linear algebra. |
ScaleProblem | When true, perform internal scaling to improve problem conditioning and possibly obtain a faster or more accurate solution. The default is false. On some problems with poorly scaled linear constraint (A or Aeq) or quadratic objective coefficient (H) matrices, the true setting can provide a noticeable speedup or improved solution accuracy. However, ScaleProblem=true can add a small time overhead to the solver, and in some cases can cause poor convergence. |
'active-set' Algorithm Only
ConstraintTolerance | Tolerance on the constraint violation; a nonnegative scalar. The default value is 1e-8. For optimset, the option name is TolCon. See Current and Legacy Option Names. |
---|---|
ObjectiveLimit | A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the current point is feasible, the iterations halt because the problem is presumably unbounded. The default value is -1e20. |
UseCodegenSolver | Indication to use the version of the software that runs on target hardware, specified as true or the default false. Use this option for testing code in the desktop environment prior to generating code. You can pass this option when generating code, but it has no effect on code generation. |
Single-Precision Code Generation
Algorithm | Must be 'active-set'. |
---|---|
ConstraintTolerance | Tolerance on the constraint violation, a positive scalar. The default value is 1e-4. For optimset, the option name is TolCon. See Current and Legacy Option Names. |
MaxIterations | Maximum number of iterations allowed, a nonnegative integer. The default value is 10*(nVar + mConstr), where nVar is the number of problem variables and mConstr is the number of constraints. |
ObjectiveLimit | A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the current point is feasible, the iterations halt because the problem is presumably unbounded. The default value is -1e20. |
OptimalityTolerance | Termination tolerance on the first-order optimality, a positive scalar. The default value is 1e-4. See First-Order Optimality Measure. For optimset, the name is TolFun. See Current and Legacy Option Names. |
StepTolerance | Termination tolerance on x, a positive scalar. The default value is 1e-4. For optimset, the option name is TolX. See Current and Legacy Option Names. |
UseCodegenSolver | Indication to use the version of the software that runs on target hardware, specified as true or the default false. Use this option for testing code in the desktop environment prior to generating code. You can pass this option when generating code, but it has no effect on code generation. |
Problem structure, specified as a structure with these fields:
H | Symmetric matrix in 1/2*x'*H*x |
---|---|
f | Vector in linear term f'*x |
Aineq | Matrix in linear inequality constraints Aineq*x ≤ bineq |
bineq | Vector in linear inequality constraints Aineq*x ≤ bineq |
Aeq | Matrix in linear equality constraints Aeq*x = beq |
beq | Vector in linear equality constraints Aeq*x = beq |
lb | Vector of lower bounds |
ub | Vector of upper bounds |
x0 | Initial point for x |
solver | 'quadprog' |
options | Options created using optimoptions or optimset |
The required fields are H, f, solver, and options. When solving, quadprog ignores any fields in problem other than those listed.
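For example, you can assemble the structure directly, reusing the data from the inequality-constrained example above; a minimal sketch:

```matlab
problem.H = [1 -1; -1 2];
problem.f = [-2; -6];
problem.Aineq = [1 1; -1 2; 2 1];
problem.bineq = [2; 2; 3];
problem.solver = 'quadprog';                  % required field
problem.options = optimoptions('quadprog');   % required field
x = quadprog(problem);
```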
Note
You cannot use warm start with the problem argument.
Data Types: struct
Warm start object, specified as an object created using optimwarmstart. The warm start object contains the start point and options, and optional data for memory size in code generation. See Warm Start Best Practices.
Example: ws = optimwarmstart(x0,options)
Output Arguments
Solution, returned as a real vector. x is the vector that minimizes 1/2*x'*H*x + f'*x subject to all bounds and linear constraints. x can be a local minimum for nonconvex problems. For convex problems, x is a global minimum. For more information, see Local vs. Global Optima.
Solution warm start object, returned as a QuadprogWarmStart object. The solution point is wsout.X.
You can use wsout as the input warm start object in a subsequent quadprog call.
Objective function value at the solution, returned as a real scalar. fval is the value of 1/2*x'*H*x + f'*x at the solution x.
Reason quadprog stopped, returned as an integer described in this table.
All Algorithms | |
---|---|
1 | Function converged to the solution x. |
0 | Number of iterations exceeded options.MaxIterations. |
-2 | Problem is infeasible. Or, for 'interior-point-convex', the step size was smaller than options.StepTolerance, but constraints were not satisfied. |
-3 | Problem is unbounded. |
'interior-point-convex' Algorithm | |
2 | Step size was smaller than options.StepTolerance; constraints were satisfied. |
-6 | Nonconvex problem detected. |
-8 | Unable to compute a step direction. |
'trust-region-reflective' Algorithm | |
4 | Local minimum found; minimum is not unique. |
3 | Change in the objective function value was smaller than options.FunctionTolerance. |
-4 | Current search direction was not a direction of descent. No further progress could be made. |
'active-set' Algorithm | |
-6 | Nonconvex problem detected; projection of H onto the nullspace of Aeq is not positive semidefinite. |
Note
Occasionally, the 'active-set' algorithm halts with exit flag 0 when the problem is, in fact, unbounded. Setting a higher iteration limit also results in exit flag 0.
Information about the optimization process, returned as a structure with these fields:
iterations | Number of iterations taken |
---|---|
algorithm | Optimization algorithm used |
cgiterations | Total number of PCG iterations ('trust-region-reflective' algorithm only) |
constrviolation | Maximum of constraint functions |
firstorderopt | Measure of first-order optimality |
linearsolver | Type of internal linear solver, 'dense' or 'sparse' ('interior-point-convex' algorithm only) |
message | Exit message |
Lagrange multipliers at the solution, returned as a structure with these fields:
lower | Lower bounds lb |
---|---|
upper | Upper bounds ub |
ineqlin | Linear inequalities |
eqlin | Linear equalities |
For details, see Lagrange Multiplier Structures.
More About
The next few items list the possible enhanced exit messages from quadprog. Enhanced exit messages give a link for more information as the first sentence of the message.
The solver found a minimizing point that satisfies all bounds and linear constraints. Since the problem is convex, the minimizing point is a global minimum. For more information, see Local vs. Global Optima.
The solver stopped because the last step was too small. When the relative step size goes below the StepTolerance tolerance, then the iterations end. Sometimes, this means that the solver located the minimum. However, the first-order optimality measure was not less than the OptimalityTolerance, so it is possible that the result is inaccurate. All constraints were satisfied.
To proceed, try the following:
- Examine the first-order optimality measure in the output structure. If the first-order optimality measure is small, then it is likely that the returned solution is accurate.
- Set the StepTolerance option to 0. Sometimes, this setting helps the solver proceed, though sometimes the solver remains stalled because of other issues.
- Try a different algorithm. If the solver offers a choice of algorithms, sometimes a different algorithm can succeed.
- Try removing dependent constraints; that is, ensure that none of the linear constraints are redundant.
quadprog stopped because it appears to have found a direction that satisfies all constraints and causes the objective to decrease without bound.
To proceed,
- Ensure that you have a finite bound for each component.
- Check the objective function to ensure that it is strictly convex (the quadratic matrix has strictly positive eigenvalues).
- See if the associated linear programming problem (the original problem without the quadratic term) has a finite solution.
The solver was unable to proceed because it could not compute a direction leading to a minimum. It is likely that this trouble is due to redundant linear constraints or tolerances that are too small.
To proceed,
- Check your linear constraint matrices for redundancy. Try to identify and remove redundant linear constraints.
- Ensure that your FunctionTolerance, OptimalityTolerance, and ConstraintTolerance options are above 1e-14, and preferably above 1e-12. See Tolerances and Stopping Criteria.
The solver found the solution during the presolve phase. This means the bounds, linear constraints, and f (linear objective coefficient) immediately lead to a solution. For more information, see Presolve/Postsolve.
During presolve, the solver found that the problem has an inconsistent formulation. Inconsistent means not all constraints can be satisfied at a single point x. For more information, see Presolve/Postsolve.
During presolve, the solver found a feasible direction where the objective function decreases without bound. For more information, see Presolve/Postsolve.
The solver converged to a point that does not satisfy all constraints to within the constraint tolerance called ConstraintTolerance. The reason the solver stopped is that the last step was too small. When the relative step size goes below the StepTolerance tolerance, then the iterations end.
There is only one feasible point. The number of independent linear equality constraints is the same as the number of variables in the problem.
The solver stopped because the first-order optimality measure is less than the OptimalityTolerance tolerance.
The first-order optimality measure is the infinity norm of the projected gradient. The projection is onto the null space of the linear equality matrix Aeq.
The solver stopped at a point of zero curvature that is a local minimum. There are other feasible points that have the same objective function value.
There are directions of zero or negative curvature along which the objective function decreases indefinitely. Therefore, for any target value, there are feasible points with objective value smaller than the target. Check whether you included enough constraints in the problem, such as bounds on all variables.
The solver stopped because the relative change in function value was below the FunctionTolerance tolerance. To check solution quality, see Local Minimum Possible.
The solver stopped because the relative change in function value was below the square root of the FunctionTolerance
tolerance, and the change of function values in the previous iterations is decreasing by less than a factor of 3.5. This criterion stops the solver when the difference of objective function values is relatively small, but does not decrease to zero quickly enough. To check solution quality, see Local Minimum Possible.
The next few items contain definitions for terms in the quadprog
exit messages.
Generally, a tolerance is a threshold which, if crossed, stops the iterations of a solver. For more information on tolerances, see Tolerances and Stopping Criteria.
A quadratic program is convex if, from any feasible point, there is no feasible direction with negative curvature. A convex problem has only one local minimum, which is also the global minimum.
The feasible directions from a feasible point x are those vectors v such that, for small enough positive a, x + av is feasible.
A feasible point is one satisfying all the constraints.
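One practical way to check convexity (a sketch, with an illustrative matrix) is to test whether H is positive semidefinite, for example by inspecting the eigenvalues of its symmetric part:

```matlab
% A quadratic program is convex when H is positive semidefinite.
H = [4 1; 1 2];
if min(eig((H + H')/2)) >= -1e-10   % small tolerance for roundoff
    disp("H is (numerically) positive semidefinite: the QP is convex")
end
```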
StepTolerance
is a tolerance for the size of the last step, meaning the size of the change in location where quadprog evaluated the objective.
The tolerance called OptimalityTolerance
relates to the first-order optimality measure. Iterations end when the first-order optimality measure is less than OptimalityTolerance
.
For constrained problems, the first-order optimality measure is the maximum of the following two quantities:
For unconstrained problems, the first-order optimality measure is the maximum of the absolute value of the components of the gradient vector (also known as the infinity norm).
The first-order optimality measure should be zero at a minimizing point.
For more information, including definitions of all the variables in these equations, see First-Order Optimality Measure.
For unconstrained problems, the first-order optimality measure is the maximum of the absolute value of the components of the gradient vector (also known as the infinity norm of the gradient). This should be zero at a minimizing point.
For problems with bounds, the first-order optimality measure is the maximum over i of |vi*gi|. Here gi is the ith component of the gradient, x is the current point, and vi depends on the distance from xi to its bounds.
If xi is at a bound, vi is zero. If xi is not at a bound, then at a minimizing point the gradient gi should be zero. Therefore the first-order optimality measure should be zero at a minimizing point.
For more information, see First-Order Optimality Measure.
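For an unconstrained quadratic, the gradient is Hx + f, so the first-order optimality measure can be checked directly. The values below are illustrative:

```matlab
% At an unconstrained minimizer, the gradient Hx + f should vanish,
% so norm(H*x + f, Inf) gives the first-order optimality measure.
H = [2 0; 0 4]; f = [-2; -4];
x = quadprog(H,f);
firstOrderOpt = norm(H*x + f, Inf)   % near zero at the solution
```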
The constraint tolerance, ConstraintTolerance, is an upper bound on the maximum of the values of all constraint functions at the current point.
ConstraintTolerance
operates differently from other tolerances. If ConstraintTolerance
is not satisfied (i.e., if the magnitude of the constraint function exceeds ConstraintTolerance
), the solver attempts to continue, unless it is halted for another reason. A solver does not halt simply because ConstraintTolerance
is satisfied.
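As a rough sketch (with assumed problem data), you can measure the constraint violation at a returned point yourself and compare it against ConstraintTolerance:

```matlab
% Maximum violation of A*x <= b at the returned point x.
H = eye(2); f = [-1; -1];
A = [1 1]; b = 1;
x = quadprog(H,f,A,b);
maxViolation = max([A*x - b; 0])   % compare against ConstraintTolerance
```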
The dual feasibility rd is defined in terms of the KKT conditions for the problem. The relative dual feasibility stopping condition is
r_d ≤ ρ·OptimalityTolerance,    (1)
where ρ is a scale factor.
For more information, see Predictor-Corrector.
The KKT conditions state that at an optimum x, there are Lagrange multipliers λ̄_ineq and λ_eq such that
The variables Ā, λ̄_ineq, and b̄ include bounds as part of the linear inequalities.
The dual feasibility r_d is the absolute value of r_d = Hx + c + Aeq^T λ_eq + Ā^T λ̄_ineq.
The scale factor ρ is
The norm ‖⋅‖ is the maximum absolute value of the elements in the expression.
The complementarity measure is defined in terms of the KKT conditions for the problem. At an optimum x, there are Lagrange multipliers λ̄_ineq and λ_eq such that
The variables Ā, λ̄_ineq, and b̄ include bounds as part of the linear inequalities.
The complementarity measure is:
For more information, see Predictor-Corrector.
The total relative error is defined in terms of the KKT conditions for the problem. The total relative error stopping condition holds when the Merit Function φ satisfies
φ ≥ max(OptimalityTolerance, 10^5·φ_min).    (2)
When this stopping condition holds, the solver determines that the quadratic program is infeasible.
The KKT conditions state that at an optimum x, there are Lagrange multipliers λ̄_ineq and λ_eq such that
The variables Ā, λ̄_ineq, and b̄ include bounds as part of the linear inequalities.
The merit function φ is
The terms in the definition of φ are:
The expression φ_min means the minimum of φ seen in all iterations.
Presolve is a set of algorithms that simplify a linear or quadratic programming problem. The algorithms look for simple inconsistencies such as inconsistent bounds and linear constraints. They also look for redundant bounds and linear inequalities. For more information, see Presolve/Postsolve.
The internally calculated search direction does not decrease the objective function value. Perhaps the problem is poorly scaled or has an ill-conditioned matrix (H for quadprog, C for lsqlin). For suggestions on how to proceed, see When the Solver Fails or Local Minimum Possible.
Algorithms
The 'interior-point-convex'
algorithm attempts to follow a path that is strictly inside the constraints. It uses a presolve module to remove redundancies and to simplify the problem by solving for components that are straightforward.
The algorithm has different implementations for a sparse Hessian matrixH and for a dense matrix. Generally, the sparse implementation is faster on large, sparse problems, and the dense implementation is faster on dense or small problems. For more information, see interior-point-convex quadprog Algorithm.
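For instance (an assumed large problem), supplying H as a sparse matrix lets the solver use its sparse implementation:

```matlab
% A large, sparse, strictly convex QP: the sparse implementation of the
% interior-point-convex algorithm is typically faster for problems
% like this than the dense implementation.
n = 1000;
H = speye(n);                   % sparse identity Hessian
f = -ones(n,1);
lb = zeros(n,1); ub = ones(n,1);
x = quadprog(H,f,[],[],[],[],lb,ub);
```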
The 'trust-region-reflective'
algorithm is a subspace trust-region method based on the interior-reflective Newton method described in[1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). For more information, seetrust-region-reflective quadprog Algorithm.
A warm start object maintains a list of active constraints from the previous solved problem. The solver carries over as much active constraint information as possible to solve the current problem. If the previous problem is too different from the current one, no active set information is reused. In this case, the solver effectively executes a cold start in order to rebuild the list of active constraints.
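The warm start workflow can be sketched as follows, using optimwarmstart to create the object and passing the returned object back in when solving a nearby problem. The problem data here is illustrative:

```matlab
% Solve once, then reuse the active constraint information for a
% slightly modified problem by passing the warm start object back in.
H = [2 0; 0 2]; f = [-2; -4];
A = [1 1]; b = 1;
opts = optimoptions("quadprog",Algorithm="active-set");
ws = optimwarmstart(zeros(2,1),opts);          % initial warm start object
[ws,fval] = quadprog(H,f,A,b,[],[],[],[],ws);  % ws now holds active constraints
f2 = [-2.1; -4.1];                             % nearby problem
[ws,fval2] = quadprog(H,f2,A,b,[],[],[],[],ws);
```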
Alternative Functionality
App
The Optimize Live Editor task provides a visual interface for quadprog
.
References
[1] Coleman, T. F., and Y. Li. “A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables.” SIAM Journal on Optimization. Vol. 6, Number 4, 1996, pp. 1040–1058.
[2] Gill, P. E., W. Murray, and M. H. Wright.Practical Optimization. London: Academic Press, 1981.
[3] Gould, N., and P. L. Toint. “Preprocessing for quadratic programming.” Mathematical Programming. Series B, Vol. 100, 2004, pp. 95–132.
Extended Capabilities
Usage notes and limitations:
- quadprog supports code generation using either the codegen (MATLAB Coder) function or the MATLAB® Coder™ app. You must have a MATLAB Coder license to generate code.
- The target hardware must support standard double-precision floating-point computations or standard single-precision floating-point computations.
- Code generation targets do not use the same math kernel libraries as MATLAB solvers. Therefore, code generation solutions can vary from solver solutions, especially for poorly conditioned problems.
- To test your code in MATLAB before generating code, set the UseCodegenSolver option to true. That way, the solver uses the same code that code generation creates.
- quadprog does not support the problem argument for code generation.
  [x,fval] = quadprog(problem) % Not supported
- All quadprog input matrices such as A, Aeq, lb, and ub must be full, not sparse. You can convert sparse matrices to full by using the full function.
- The lb and ub arguments must have the same number of entries as the number of columns in H or must be empty [].
- If your target hardware does not support infinite bounds, use optim.coder.infbound.
- For advanced code optimization involving embedded processors, you also need an Embedded Coder® license.
- You must include options for quadprog and specify them using optimoptions. The options must include the Algorithm option, set to 'active-set'.
  options = optimoptions("quadprog",Algorithm="active-set");
  [x,fval,exitflag] = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options);
- Code generation supports these options:
  - Algorithm — Must be 'active-set'
  - ConstraintTolerance
  - MaxIterations
  - ObjectiveLimit
  - OptimalityTolerance
  - StepTolerance
  - UseCodegenSolver
- Generated code has limited error checking for options. The recommended way to update an option is to use optimoptions, not dot notation.
  opts = optimoptions('quadprog','Algorithm','active-set');
  opts = optimoptions(opts,'MaxIterations',1e4); % Recommended
  opts.MaxIterations = 1e4; % Not recommended
- Do not load options from a file. Doing so can cause code generation to fail. Instead, create options in your code.
- If you specify an option that is not supported, the option is typically ignored during code generation. For reliable results, specify only supported options.
For an example, see Generate Code for quadprog.
Version History
Introduced before R2006a
Set the new UseCodegenSolver option to true to have quadprog use the same version of the software that code generation creates. This option allows you to check the behavior of the solver before you generate code or deploy it to hardware. For solvers that support single-precision code generation, the generated code can also support single-precision hardware. You can include the option when you generate code; the option has no effect in code generation, but leaving it in saves you the step of removing it. Even though the generated code is identical to the MATLAB code, results can differ slightly because linked math libraries can differ.
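For example (a sketch, assuming the active-set algorithm required for code generation), you can enable the option while testing in MATLAB:

```matlab
% Run quadprog with the same code path that code generation uses.
opts = optimoptions("quadprog",Algorithm="active-set",UseCodegenSolver=true);
H = [2 0; 0 2]; f = [-2; -4];
x = quadprog(H,f,[],[],[],[],[],[],[],opts);
```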
The "interior-point-convex" algorithm of the quadprog solver gains the ScaleProblem option, which can speed the solution of some poorly scaled problems. For details, see the quadprog options argument description.