solve - Solve optimization problem or equation problem - MATLAB
Solve optimization problem or equation problem
Syntax
Description
Use solve to find the solution of an optimization problem or equation problem.
sol = solve(prob) solves the optimization problem or equation problem prob.
sol = solve(prob,x0) solves prob starting from the point or set of values x0.
sol = solve(prob,x0,ms) solves prob using the ms multiple-start solver. Use this syntax to search for a better solution than you obtain when not using the ms argument.
sol = solve(___,Name,Value) modifies the solution process using one or more name-value arguments in addition to the input arguments in previous syntaxes.
[sol,fval] = solve(___) also returns the objective function value at the solution using any of the input arguments in previous syntaxes.
[sol,fval,exitflag,output,lambda] = solve(___) also returns an exit flag describing the exit condition, an output structure containing additional information about the solution process, and, for non-integer optimization problems, a Lagrange multiplier structure.
Examples
Solve a linear programming problem defined by an optimization problem.
x = optimvar('x');
y = optimvar('y');
prob = optimproblem;
prob.Objective = -x - y/3;
prob.Constraints.cons1 = x + y <= 2;
prob.Constraints.cons2 = x + y/4 <= 1;
prob.Constraints.cons3 = x - y <= 2;
prob.Constraints.cons4 = x/4 + y >= -1;
prob.Constraints.cons5 = x + y >= 1;
prob.Constraints.cons6 = -x + y <= 2;
sol = solve(prob)
Solving problem using linprog.
Optimal solution found.
sol = struct with fields: x: 0.6667 y: 1.3333
Find a minimum of the peaks function, which is included in MATLAB®, in the region x^2 + y^2 ≤ 4. To do so, create optimization variables x and y.
x = optimvar('x'); y = optimvar('y');
Create an optimization problem having peaks as the objective function.
prob = optimproblem("Objective",peaks(x,y));
Include the constraint as an inequality in the optimization variables.
prob.Constraints = x^2 + y^2 <= 4;
Set the initial point for x to 1 and y to –1, and solve the problem.

x0.x = 1;
x0.y = -1;
sol = solve(prob,x0)
Solving problem using fmincon.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol = struct with fields: x: 0.2283 y: -1.6255
Unsupported Functions Require fcn2optimexpr
If your objective or nonlinear constraint functions are not entirely composed of elementary functions, you must convert the functions to optimization expressions using fcn2optimexpr. See Convert Nonlinear Function to Optimization Expression and Supported Operations for Optimization Variables and Expressions.
To convert the present example:
convpeaks = fcn2optimexpr(@peaks,x,y);
prob.Objective = convpeaks;
sol2 = solve(prob,x0)
Solving problem using fmincon.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol2 = struct with fields: x: 0.2283 y: -1.6255
Copyright 2018–2020 The MathWorks, Inc.
Compare the number of steps to solve an integer programming problem both with and without an initial feasible point. The problem has eight integer variables and four linear equality constraints, and all variables are restricted to be positive.
prob = optimproblem; x = optimvar('x',8,1,'LowerBound',0,'Type','integer');
Create four linear equality constraints and include them in the problem.
Aeq = [22 13 26 33 21  3 14 26
       39 16 22 28 26 30 23 24
       18 14 29 27 30 38 26 26
       41 26 28 36 18 38 16 26];
beq = [7872
       10466
       11322
       12058];
cons = Aeq*x == beq;
prob.Constraints.cons = cons;
Create an objective function and include it in the problem.
f = [2 10 13 17 7 5 7 3];
prob.Objective = f*x;
Solve the problem without using an initial point, and examine the display to see the number of branch-and-bound nodes.
[x1,fval1,exitflag1,output1] = solve(prob);
Solving problem using intlinprog. Running HiGHS 1.7.1: Copyright (c) 2024 HiGHS under MIT licence terms Coefficient ranges: Matrix [3e+00, 4e+01] Cost [2e+00, 2e+01] Bound [0e+00, 0e+00] RHS [8e+03, 1e+04] Presolving model 4 rows, 8 cols, 32 nonzeros 0s 4 rows, 8 cols, 27 nonzeros 0s Objective function is integral with scale 1
Solving MIP model with: 4 rows 8 cols (0 binary, 8 integer, 0 implied int., 0 continuous) 27 nonzeros
Nodes | B&B Tree | Objective Bounds | Dynamic Constraints | Work
Proc. InQueue | Leaves Expl. | BestBound BestSol Gap | Cuts InLp Confl. | LpIters Time
0 0 0 0.00% 0 inf inf 0 0 0 0 0.0s
0 0 0 0.00% 1554.047531 inf inf 0 0 4 4 0.0s
T 20753 210 8189 98.04% 1783.696925 1854 3.79% 30 8 9884 19222 2.8s
Solving report Status Optimal Primal bound 1854 Dual bound 1854 Gap 0% (tolerance: 0.01%) Solution status feasible 1854 (objective) 0 (bound viol.) 9.63673585375e-14 (int. viol.) 0 (row viol.) Timing 2.86 (total) 0.00 (presolve) 0.00 (postsolve) Nodes 21163 LP iterations 19608 (total) 223 (strong br.) 76 (separation) 1018 (heuristics)
Optimal solution found.
Intlinprog stopped because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 1e-06. The intcon variables are integer within tolerance, options.ConstraintTolerance = 1e-06.
For comparison, find the solution using an initial feasible point.
x0.x = [8 62 23 103 53 84 46 34]';
[x2,fval2,exitflag2,output2] = solve(prob,x0);
Solving problem using intlinprog. Running HiGHS 1.7.1: Copyright (c) 2024 HiGHS under MIT licence terms Coefficient ranges: Matrix [3e+00, 4e+01] Cost [2e+00, 2e+01] Bound [0e+00, 0e+00] RHS [8e+03, 1e+04] Assessing feasibility of MIP using primal feasibility and integrality tolerance of 1e-06 Solution has num max sum Col infeasibilities 0 0 0 Integer infeasibilities 0 0 0 Row infeasibilities 0 0 0 Row residuals 0 0 0 Presolving model 4 rows, 8 cols, 32 nonzeros 0s 4 rows, 8 cols, 27 nonzeros 0s
MIP start solution is feasible, objective value is 3901 Objective function is integral with scale 1
Solving MIP model with: 4 rows 8 cols (0 binary, 8 integer, 0 implied int., 0 continuous) 27 nonzeros
Nodes | B&B Tree | Objective Bounds | Dynamic Constraints | Work
Proc. InQueue | Leaves Expl. | BestBound BestSol Gap | Cuts InLp Confl. | LpIters Time
0 0 0 0.00% 0 3901 100.00% 0 0 0 0 0.0s
0 0 0 0.00% 1554.047531 3901 60.16% 0 0 4 4 0.0s
T 6266 708 2644 73.61% 1662.791423 3301 49.63% 20 6 9746 10699 1.4s T 9340 919 3970 80.72% 1692.410008 2687 37.01% 29 6 9995 16120 2.1s T 21750 192 9514 96.83% 1791.542628 1854 3.37% 20 6 9984 40278 5.2s
Solving report Status Optimal Primal bound 1854 Dual bound 1854 Gap 0% (tolerance: 0.01%) Solution status feasible 1854 (objective) 0 (bound viol.) 1.42108547152e-13 (int. viol.) 0 (row viol.) Timing 5.25 (total) 0.00 (presolve) 0.00 (postsolve) Nodes 22163 LP iterations 40863 (total) 538 (strong br.) 64 (separation) 2782 (heuristics)
Optimal solution found.
Intlinprog stopped because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 1e-06. The intcon variables are integer within tolerance, options.ConstraintTolerance = 1e-06.
fprintf('Without an initial point, solve took %d steps.\nWith an initial point, solve took %d steps.',output1.numnodes,output2.numnodes)
Without an initial point, solve took 21163 steps. With an initial point, solve took 22163 steps.
Giving an initial point does not always improve the problem. For this problem, using an initial point costs time and computational steps: solve explores more branch-and-bound nodes with the initial point than without it. However, for other problems, supplying an initial point can cause solve to take fewer steps.
For some solvers, you can pass the objective and constraint function values, if any, to solve in the x0 argument. This can save time in the solver. Pass a vector of OptimizationValues objects. Create this vector using the optimvalues function.

The solvers that can use the objective function values are:
ga
gamultiobj
paretosearch
surrogateopt
The solvers that can use nonlinear constraint function values are:
paretosearch
surrogateopt
For example, minimize the peaks function using surrogateopt, starting with values from a grid of initial points. Create a grid from –10 to 10 in the x variable, and –5/2 to 5/2 in the y variable with spacing 1/2. Compute the objective function values at the initial points.
x = optimvar("x",LowerBound=-10,UpperBound=10);
y = optimvar("y",LowerBound=-5/2,UpperBound=5/2);
prob = optimproblem("Objective",peaks(x,y));
xval = -10:10;
yval = (-5:5)/2;
[x0x,x0y] = meshgrid(xval,yval);
peaksvals = peaks(x0x,x0y);
Pass the values in the x0 argument by using optimvalues. This saves time for solve, as solve does not need to compute the values. Pass the values as row vectors.
x0 = optimvalues(prob,'x',x0x(:)','y',x0y(:)', ...
    "Objective",peaksvals(:)');
Solve the problem using surrogateopt with the initial values.
[sol,fval,eflag,output] = solve(prob,x0,Solver="surrogateopt")
Solving problem using surrogateopt.
surrogateopt stopped because it exceeded the function evaluation limit set by 'options.MaxFunctionEvaluations'.
sol = struct with fields: x: 0.2279 y: -1.6258
eflag = SolverLimitExceeded
output = struct with fields: elapsedtime: 29.8029 funccount: 200 constrviolation: 0 ineq: [1×1 struct] rngstate: [1×1 struct] message: 'surrogateopt stopped because it exceeded the function evaluation limit set by ↵'options.MaxFunctionEvaluations'.' solver: 'surrogateopt'
Find a local minimum of the peaks function on the range –5 ≤ x,y ≤ 5, starting from the point [–1,2].
x = optimvar("x",LowerBound=-5,UpperBound=5);
y = optimvar("y",LowerBound=-5,UpperBound=5);
x0.x = -1;
x0.y = 2;
prob = optimproblem(Objective=peaks(x,y));
opts = optimoptions("fmincon",Display="none");
[sol,fval] = solve(prob,x0,Options=opts)
sol = struct with fields: x: -3.3867 y: 3.6341
Try to find a better solution by using the GlobalSearch solver. This solver runs fmincon multiple times, which potentially yields a better solution.

ms = GlobalSearch;
[sol2,fval2] = solve(prob,x0,ms)
Solving problem using GlobalSearch.
GlobalSearch stopped because it analyzed all the trial points.
All 15 local solver runs converged with a positive local solver exit flag.
sol2 = struct with fields: x: 0.2283 y: -1.6255
GlobalSearch finds a solution with a better (lower) objective function value. The exit message shows that fmincon, the local solver, runs 15 times. The returned solution has an objective function value of about –6.5511, which is lower than the value at the first solution, 1.1224e–07.
Solve the problem

min_x  −3x_1 − 2x_2 − x_3

subject to:

x_3 binary
x_1, x_2 ≥ 0
x_1 + x_2 + x_3 ≤ 7
4x_1 + 2x_2 + x_3 = 12

without showing iterative display.
x = optimvar('x',2,1,'LowerBound',0);
x3 = optimvar('x3','Type','integer','LowerBound',0,'UpperBound',1);
prob = optimproblem;
prob.Objective = -3*x(1) - 2*x(2) - x3;
prob.Constraints.cons1 = x(1) + x(2) + x3 <= 7;
prob.Constraints.cons2 = 4*x(1) + 2*x(2) + x3 == 12;
options = optimoptions('intlinprog','Display','off');
sol = solve(prob,'Options',options)
sol = struct with fields: x: [2×1 double] x3: 0
Examine the solution.
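The values of each optimization variable are stored in the field of sol with that variable's name. For example, index into the returned structure to see the 2-by-1 variable:

```matlab
% Display the values of the 2-by-1 variable x at the solution.
sol.x
```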
Force solve to use intlinprog as the solver for a linear programming problem.
x = optimvar('x');
y = optimvar('y');
prob = optimproblem;
prob.Objective = -x - y/3;
prob.Constraints.cons1 = x + y <= 2;
prob.Constraints.cons2 = x + y/4 <= 1;
prob.Constraints.cons3 = x - y <= 2;
prob.Constraints.cons4 = x/4 + y >= -1;
prob.Constraints.cons5 = x + y >= 1;
prob.Constraints.cons6 = -x + y <= 2;
sol = solve(prob,'Solver', 'intlinprog')
Solving problem using intlinprog. Running HiGHS 1.7.1: Copyright (c) 2024 HiGHS under MIT licence terms Coefficient ranges: Matrix [2e-01, 1e+00] Cost [3e-01, 1e+00] Bound [0e+00, 0e+00] RHS [1e+00, 2e+00] Presolving model 6 rows, 2 cols, 12 nonzeros 0s 4 rows, 2 cols, 8 nonzeros 0s 4 rows, 2 cols, 8 nonzeros 0s Presolve : Reductions: rows 4(-2); columns 2(-0); elements 8(-4) Solving the presolved LP Using EKK dual simplex solver - serial Iteration Objective Infeasibilities num(sum) 0 -1.3333333333e+03 Ph1: 3(4499); Du: 2(1.33333) 0s 3 -1.1111111111e+00 Pr: 0(0) 0s Solving the original LP from the solution after postsolve Model status : Optimal Simplex iterations: 3 Objective value : -1.1111111111e+00 HiGHS run time : 0.00
Optimal solution found.
No integer variables specified. Intlinprog solved the linear problem.
sol = struct with fields: x: 0.6667 y: 1.3333
Solve the mixed-integer linear programming problem described in Solve Integer Programming Problem with Nondefault Options and examine all of the output data.
x = optimvar('x',2,1,'LowerBound',0);
x3 = optimvar('x3','Type','integer','LowerBound',0,'UpperBound',1);
prob = optimproblem;
prob.Objective = -3*x(1) - 2*x(2) - x3;
prob.Constraints.cons1 = x(1) + x(2) + x3 <= 7;
prob.Constraints.cons2 = 4*x(1) + 2*x(2) + x3 == 12;
[sol,fval,exitflag,output] = solve(prob)
Solving problem using intlinprog. Running HiGHS 1.7.1: Copyright (c) 2024 HiGHS under MIT licence terms Coefficient ranges: Matrix [1e+00, 4e+00] Cost [1e+00, 3e+00] Bound [1e+00, 1e+00] RHS [7e+00, 1e+01] Presolving model 2 rows, 3 cols, 6 nonzeros 0s 0 rows, 0 cols, 0 nonzeros 0s Presolve: Optimal
Solving report Status Optimal Primal bound -12 Dual bound -12 Gap 0% (tolerance: 0.01%) Solution status feasible -12 (objective) 0 (bound viol.) 0 (int. viol.) 0 (row viol.) Timing 0.00 (total) 0.00 (presolve) 0.00 (postsolve) Nodes 0 LP iterations 0 (total) 0 (strong br.) 0 (separation) 0 (heuristics)
Optimal solution found.
Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 1e-06. The intcon variables are integer within tolerance, options.ConstraintTolerance = 1e-06.
sol = struct with fields: x: [2×1 double] x3: 0
exitflag = OptimalSolution
output = struct with fields: relativegap: 0 absolutegap: 0 numfeaspoints: 1 numnodes: 0 constrviolation: 0 algorithm: 'highs' message: 'Optimal solution found.↵↵Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 1e-06. The intcon variables are integer within tolerance, options.ConstraintTolerance = 1e-06.' solver: 'intlinprog'
For a problem without any integer constraints, you can also obtain a nonempty Lagrange multiplier structure as the fifth output.
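As a sketch of retrieving the multipliers, request all five outputs for a problem with continuous variables only (the variable and constraint names here are illustrative, not from the examples above):

```matlab
% Hypothetical linear program with no integer constraints.
x = optimvar('x',2,1,'LowerBound',0);
prob = optimproblem;
prob.Objective = x(1) + 2*x(2);
prob.Constraints.total = x(1) + x(2) >= 1;
% The fifth output is the Lagrange multiplier structure.
[sol,fval,exitflag,output,lambda] = solve(prob);
disp(lambda)
```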
Create and solve an optimization problem using named index variables. The problem is to maximize the profit-weighted flow of fruit to various airports, subject to constraints on the weighted flows.
rng(0) % For reproducibility
p = optimproblem('ObjectiveSense','maximize');
flow = optimvar('flow', ...
    {'apples','oranges','bananas','berries'}, {'NYC','BOS','LAX'}, ...
    'LowerBound',0,'Type','integer');
p.Objective = sum(sum(rand(4,3).*flow));
p.Constraints.NYC = rand(1,4)*flow(:,'NYC') <= 10;
p.Constraints.BOS = rand(1,4)*flow(:,'BOS') <= 12;
p.Constraints.LAX = rand(1,4)*flow(:,'LAX') <= 35;
sol = solve(p);
Solving problem using intlinprog. Running HiGHS 1.7.1: Copyright (c) 2024 HiGHS under MIT licence terms Coefficient ranges: Matrix [4e-02, 1e+00] Cost [1e-01, 1e+00] Bound [0e+00, 0e+00] RHS [1e+01, 4e+01] Presolving model 3 rows, 12 cols, 12 nonzeros 0s 3 rows, 12 cols, 12 nonzeros 0s
Solving MIP model with: 3 rows 12 cols (0 binary, 12 integer, 0 implied int., 0 continuous) 12 nonzeros
Nodes | B&B Tree | Objective Bounds | Dynamic Constraints | Work
Proc. InQueue | Leaves Expl. | BestBound BestSol Gap | Cuts InLp Confl. | LpIters Time
0 0 0 0.00% 1160.150059 -inf inf 0 0 0 0 0.0s
S 0 0 0 0.00% 1160.150059 1027.233133 12.94% 0 0 0 0 0.0s
Solving report Status Optimal Primal bound 1027.23313332 Dual bound 1027.23313332 Gap 0% (tolerance: 0.01%) Solution status feasible 1027.23313332 (objective) 0 (bound viol.) 0 (int. viol.) 0 (row viol.) Timing 0.00 (total) 0.00 (presolve) 0.00 (postsolve) Nodes 1 LP iterations 3 (total) 0 (strong br.) 0 (separation) 0 (heuristics)
Optimal solution found.
Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 1e-06. The intcon variables are integer within tolerance, options.ConstraintTolerance = 1e-06.
Find the optimal flow of oranges and berries to New York and Los Angeles.
[idxFruit,idxAirports] = findindex(flow, {'oranges','berries'}, {'NYC', 'LAX'})
orangeBerries = sol.flow(idxFruit, idxAirports)
orangeBerries = 2×2
0 980
70 0
This display means that no oranges are going to NYC, 70 berries are going to NYC, 980 oranges are going to LAX, and no berries are going to LAX.
List the optimal flow of the following:
Fruit Airports
----- --------
Berries NYC
Apples BOS
Oranges LAX
idx = findindex(flow, {'berries', 'apples', 'oranges'}, {'NYC', 'BOS', 'LAX'})
optimalFlow = sol.flow(idx)
optimalFlow = 1×3
70 28 980
This display means that 70 berries are going to NYC, 28 apples are going to BOS, and 980 oranges are going to LAX.
To solve the nonlinear system of equations

exp(−exp(−(x_1 + x_2))) = x_2 (1 + x_1^2)
x_1 cos(x_2) + x_2 sin(x_1) = 1/2

using the problem-based approach, first define x as a two-element optimization variable.
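A sketch of creating the two-element variable to match the equations above:

```matlab
x = optimvar('x',2);
```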
Create the first equation as an optimization equality expression.
eq1 = exp(-exp(-(x(1) + x(2)))) == x(2)*(1 + x(1)^2);
Similarly, create the second equation as an optimization equality expression.
eq2 = x(1)*cos(x(2)) + x(2)*sin(x(1)) == 1/2;
Create an equation problem, and place the equations in the problem.
prob = eqnproblem;
prob.Equations.eq1 = eq1;
prob.Equations.eq2 = eq2;
Review the problem.
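One way to display the problem formulation is the show function, which produces the listing that follows:

```matlab
show(prob)
```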
EquationProblem :
Solve for:
x
eq1:
exp((-exp((-(x(1) + x(2)))))) == (x(2) .* (1 + x(1).^2))
eq2:
((x(1) .* cos(x(2))) + (x(2) .* sin(x(1)))) == 0.5
Solve the problem starting from the point [0,0]. For the problem-based approach, specify the initial point as a structure, with the variable names as the fields of the structure. For this problem, there is only one variable, x.
x0.x = [0 0];
[sol,fval,exitflag] = solve(prob,x0)
Solving problem using fsolve.
Equation solved.
fsolve completed because the vector of function values is near zero as measured by the value of the function tolerance, and the problem appears regular as measured by the gradient.
sol = struct with fields: x: [2×1 double]
fval = struct with fields: eq1: -2.4070e-07 eq2: -3.8255e-08
exitflag = EquationSolved
View the solution point.
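For example, display the computed point by indexing into the solution structure:

```matlab
disp(sol.x)
```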
Input Arguments
Optimization problem or equation problem, specified as an OptimizationProblem object or an EquationProblem object. Create an optimization problem by using optimproblem; create an equation problem by using eqnproblem.
Warning
The problem-based approach does not support complex values in the following: an objective function, nonlinear equalities, and nonlinear inequalities. If a function calculation has a complex value, even as an intermediate value, the final result might be incorrect.
Example: prob = optimproblem; prob.Objective = obj; prob.Constraints.cons1 = cons1;
Example: prob = eqnproblem; prob.Equations = eqs;
Initial point, specified as a structure with field names equal to the variable names in prob.
For some Global Optimization Toolbox solvers, x0 can be a vector of OptimizationValues objects representing multiple initial points. Create the points using the optimvalues function. These solvers are:
- ga (Global Optimization Toolbox), gamultiobj (Global Optimization Toolbox), paretosearch (Global Optimization Toolbox), and particleswarm (Global Optimization Toolbox). These solvers accept multiple starting points as members of the initial population.
- MultiStart (Global Optimization Toolbox). This solver accepts multiple initial points for a local solver such as fmincon.
- surrogateopt (Global Optimization Toolbox). This solver accepts multiple initial points to help create an initial surrogate.
For an example using x0 with named index variables, see Create Initial Point for Optimization with Named Index Variables.
Example: If prob has variables named x and y: x0.x = [3,2,17]; x0.y = [pi/3,2*pi/3].
Data Types: struct
Multiple start solver, specified as a MultiStart (Global Optimization Toolbox) object or a GlobalSearch (Global Optimization Toolbox) object. Create ms using the MultiStart or GlobalSearch commands.
Currently, GlobalSearch supports only the fmincon local solver, and MultiStart supports only the fmincon, fminunc, and lsqnonlin local solvers.
Example: ms = MultiStart;
Example: ms = GlobalSearch(FunctionTolerance=1e-4);
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: solve(prob,'Options',opts)
Minimum number of start points for MultiStart (Global Optimization Toolbox), specified as a positive integer. This argument applies only when you call solve using the ms argument. solve uses all of the values in x0 as start points. If MinNumStartPoints is greater than the number of values in x0, then solve generates more start points uniformly at random within the problem bounds. If a component is unbounded, solve generates points using the default artificial bounds for MultiStart.
Example: solve(prob,x0,ms,MinNumStartPoints=50)
Data Types: double
Optimization options, specified as an object created by optimoptions or an options structure such as created by optimset.
Internally, the solve function calls a relevant solver as detailed in the 'solver' argument reference. Ensure that options is compatible with the solver. For example, intlinprog does not allow options to be a structure, and lsqnonneg does not allow options to be an object.
For suggestions on options settings to improve an intlinprog solution or the speed of a solution, see Tuning Integer Linear Programming. For linprog, the default 'dual-simplex' algorithm is generally memory-efficient and speedy. Occasionally, linprog solves a large problem faster when the Algorithm option is 'interior-point'. For suggestions on options settings to improve a nonlinear problem's solution, see Optimization Options in Common Use: Tuning and Troubleshooting and Improve Results.
Example: options = optimoptions('intlinprog','Display','none')
Optimization solver, specified as the name of a listed solver. For optimization problems, this table contains the available solvers for each problem type, including solvers from Global Optimization Toolbox. Details for equation problems appear below the optimization solver details.
For converting nonlinear problems with integer constraints using prob2struct, the resulting problem structure can depend on the chosen solver. If you do not have a Global Optimization Toolbox license, you must specify the solver. See Integer Constraints in Nonlinear Problem-Based Optimization.
The default solver for each optimization problem type is listed here.
Problem Type | Default Solver |
---|---|
Linear Programming (LP) | linprog |
Mixed-Integer Linear Programming (MILP) | intlinprog |
Quadratic Programming (QP) | quadprog |
Second-Order Cone Programming (SOCP) | coneprog |
Linear Least Squares | lsqlin |
Nonlinear Least Squares | lsqnonlin |
Nonlinear Programming (NLP) | fminunc for problems with no constraints, otherwise fmincon |
Mixed-Integer Nonlinear Programming (MINLP) | ga (Global Optimization Toolbox) |
Multiobjective | gamultiobj (Global Optimization Toolbox) |
In the solver-availability table, a check mark means the solver is available for the problem type, and x means the solver is not available.
Note
If you choose lsqcurvefit as the solver for a least-squares problem, solve uses lsqnonlin. The lsqcurvefit and lsqnonlin solvers are identical for solve.
Caution
For maximization problems (prob.ObjectiveSense is "max" or "maximize"), do not specify a least-squares solver (one with a name beginning lsq). If you do, solve throws an error, because these solvers cannot maximize.
For equation solving, this table contains the available solvers for each problem type. In the table,
- * indicates the default solver for the problem type.
- Y indicates an available solver.
- N indicates an unavailable solver.
Supported Solvers for Equations
Equation Type | lsqlin | lsqnonneg | fzero | fsolve | lsqnonlin |
---|---|---|---|---|---|
Linear | * | N | Y (scalar only) | Y | Y |
Linear plus bounds | * | Y | N | N | Y |
Scalar nonlinear | N | N | * | Y | Y |
Nonlinear system | N | N | N | * | Y |
Nonlinear system plus bounds | N | N | N | N | * |
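For example, to choose a nondefault equation solver from the table, pass its name in the 'Solver' argument. This sketch assumes an equation problem prob and initial point x0 like those in the earlier nonlinear-system example:

```matlab
% Use lsqnonlin instead of the default fsolve for a nonlinear system.
sol = solve(prob,x0,'Solver','lsqnonlin')
```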
Example: 'intlinprog'
Data Types: char | string
Indication to use automatic differentiation (AD) for the nonlinear objective function, specified as 'auto' (use AD if possible), 'auto-forward' (use forward AD if possible), 'auto-reverse' (use reverse AD if possible), or 'finite-differences' (do not use AD). Choices including auto cause the underlying solver to use gradient information when solving the problem, provided that the objective function is supported, as described in Supported Operations for Optimization Variables and Expressions. For an example, see Effect of Automatic Differentiation in Problem-Based Optimization.
Solvers choose the following type of AD by default:
- For a general nonlinear objective function, fmincon defaults to reverse AD for the objective function. fmincon defaults to reverse AD for the nonlinear constraint function when the number of nonlinear constraints is less than the number of variables. Otherwise, fmincon defaults to forward AD for the nonlinear constraint function.
- For a general nonlinear objective function, fminunc defaults to reverse AD.
- For a least-squares objective function, fmincon and fminunc default to forward AD for the objective function. For the definition of a problem-based least-squares objective function, see Write Objective Function for Problem-Based Least Squares.
- lsqnonlin defaults to forward AD when the number of elements in the objective vector is greater than or equal to the number of variables. Otherwise, lsqnonlin defaults to reverse AD.
- fsolve defaults to forward AD when the number of equations is greater than or equal to the number of variables. Otherwise, fsolve defaults to reverse AD.
Example: 'finite-differences'
Data Types: char | string
Indication to use automatic differentiation (AD) for nonlinear constraint functions, specified as 'auto' (use AD if possible), 'auto-forward' (use forward AD if possible), 'auto-reverse' (use reverse AD if possible), or 'finite-differences' (do not use AD). Choices including auto cause the underlying solver to use gradient information when solving the problem, provided that the constraint functions are supported, as described in Supported Operations for Optimization Variables and Expressions. For an example, see Effect of Automatic Differentiation in Problem-Based Optimization.
Solvers choose the following type of AD by default:
- For a general nonlinear objective function, fmincon defaults to reverse AD for the objective function. fmincon defaults to reverse AD for the nonlinear constraint function when the number of nonlinear constraints is less than the number of variables. Otherwise, fmincon defaults to forward AD for the nonlinear constraint function.
- For a general nonlinear objective function, fminunc defaults to reverse AD.
- For a least-squares objective function, fmincon and fminunc default to forward AD for the objective function. For the definition of a problem-based least-squares objective function, see Write Objective Function for Problem-Based Least Squares.
- lsqnonlin defaults to forward AD when the number of elements in the objective vector is greater than or equal to the number of variables. Otherwise, lsqnonlin defaults to reverse AD.
- fsolve defaults to forward AD when the number of equations is greater than or equal to the number of variables. Otherwise, fsolve defaults to reverse AD.
Example: 'finite-differences'
Data Types: char | string
Indication to use automatic differentiation (AD) for equation functions, specified as 'auto' (use AD if possible), 'auto-forward' (use forward AD if possible), 'auto-reverse' (use reverse AD if possible), or 'finite-differences' (do not use AD). Choices including auto cause the underlying solver to use gradient information when solving the problem, provided that the equation functions are supported, as described in Supported Operations for Optimization Variables and Expressions. For an example, see Effect of Automatic Differentiation in Problem-Based Optimization.
Solvers choose the following type of AD by default:
- For a general nonlinear objective function, fmincon defaults to reverse AD for the objective function. fmincon defaults to reverse AD for the nonlinear constraint function when the number of nonlinear constraints is less than the number of variables. Otherwise, fmincon defaults to forward AD for the nonlinear constraint function.
- For a general nonlinear objective function, fminunc defaults to reverse AD.
- For a least-squares objective function, fmincon and fminunc default to forward AD for the objective function. For the definition of a problem-based least-squares objective function, see Write Objective Function for Problem-Based Least Squares.
- lsqnonlin defaults to forward AD when the number of elements in the objective vector is greater than or equal to the number of variables. Otherwise, lsqnonlin defaults to reverse AD.
- fsolve defaults to forward AD when the number of equations is greater than or equal to the number of variables. Otherwise, fsolve defaults to reverse AD.
Example: 'finite-differences'
Data Types: char | string
Output Arguments
Solution, returned as a structure or an OptimizationValues vector. sol is an OptimizationValues vector when the problem is multiobjective. For single-objective problems, the fields of the returned structure are the names of the optimization variables in the problem. See optimvar.
Objective function value at the solution, returned as one of the following:
Problem Type | Returned Value(s)
---|---
Optimize scalar objective function f(x) | Real number f(sol)
Least squares | Real number, the sum of squares of the residuals at the solution
Solve equation | If prob.Equations is a single entry: real vector of function values at the solution, meaning the left side minus the right side of the equations. If prob.Equations has multiple named fields: structure with the same names as prob.Equations, where each field value is the left side minus the right side of the named equations.
Multiobjective | Matrix with one row for each objective function component and one column for each solution point.
Tip
If you do not request fval when solving a problem whose objective is defined as an optimization expression or equation expression, you can calculate it afterward using

fval = evaluate(prob.Objective,sol)

If the objective is defined as a structure with only one field,

fval = evaluate(prob.Objective.ObjectiveName,sol)

If the objective is a structure with multiple fields, write a loop.

fnames = fields(prob.Equations);
for i = 1:length(fnames)
    fval.(fnames{i}) = evaluate(prob.Equations.(fnames{i}),sol);
end
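As a short sketch of the single-expression case (the problem here is hypothetical), evaluate recovers the objective value from a solution structure:

```matlab
% Hypothetical scalar quadratic objective: minimize (x - 3)^2
x = optimvar('x');
prob = optimproblem('Objective',(x - 3)^2);
sol = solve(prob);                      % sol.x is near 3
fval = evaluate(prob.Objective,sol);    % objective value at the solution
```

This gives the same value that solve would have returned in its second output.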
Information about the optimization process, returned as a structure. The output structure contains the fields in the relevant underlying solver output field, depending on which solver solve called:
- 'fminbnd' output
- 'fmincon' output
- 'fminunc' output
- 'fminsearch' output
- 'fsolve' output
- 'fzero' output
- 'intlinprog' output
- 'linprog' output
- 'lsqcurvefit' or 'lsqnonlin' output
- 'lsqlin' output
- 'lsqnonneg' output
- 'quadprog' output
- 'ga' output (Global Optimization Toolbox)
- 'gamultiobj' output (Global Optimization Toolbox)
- 'paretosearch' output (Global Optimization Toolbox)
- 'particleswarm' output (Global Optimization Toolbox)
- 'patternsearch' output (Global Optimization Toolbox)
- 'simulannealbnd' output (Global Optimization Toolbox)
- 'surrogateopt' output (Global Optimization Toolbox)

'MultiStart' and 'GlobalSearch' return the output structure from the local solver. In addition, the output structure contains the following fields:
- globalSolver — Either 'MultiStart' or 'GlobalSearch'.
- objectiveDerivative — Takes the values described at the end of this section.
- constraintDerivative — Takes the values described at the end of this section, or "auto" when prob has no nonlinear constraint.
- solver — The local solver, such as 'fmincon'.
- local — Structure containing extra information about the optimization.
  * sol — Local solutions, returned as a vector of OptimizationValues objects.
  * x0 — Initial points for the local solver, returned as a cell array.
  * exitflag — Exit flags of local solutions, returned as an integer vector.
  * output — Structure array, with one row for each local solution. Each row is the local output structure corresponding to one local solution.
solve includes the additional field Solver in the output structure to identify the solver used, such as 'intlinprog'.
When Solver is a nonlinear Optimization Toolbox™ solver, solve includes one or two extra fields describing the derivative estimation type. The objectivederivative and, if appropriate, constraintderivative fields can take the following values:
- "reverse-AD" for reverse automatic differentiation
- "forward-AD" for forward automatic differentiation
- "finite-differences" for finite difference estimation
- "closed-form" for linear or quadratic functions
For details, see Automatic Differentiation Background.
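As a sketch (the objective and initial point are hypothetical), you can inspect these output fields after solving a general nonlinear problem:

```matlab
% Hypothetical smooth unconstrained problem
x = optimvar('x',2);
prob = optimproblem('Objective', sum(exp(x)) + sum(x.^2));
x0.x = [0;0];                      % initial point as a structure
[sol,fval,exitflag,output] = solve(prob,x0);
disp(output.solver)                % the solver that solve chose, e.g. 'fminunc'
disp(output.objectivederivative)   % derivative estimation type, e.g. "reverse-AD"
```

The exact values depend on the problem; for this supported-operations objective, automatic differentiation typically applies.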
Lagrange multipliers at the solution, returned as a structure.
Note
solve does not return lambda for equation-solving problems.
For the intlinprog and fminunc solvers, lambda is empty, []. For the other solvers, lambda has these fields:
- Variables — Contains fields for each problem variable. Each problem variable name is a structure with two fields:
  * Lower — Lagrange multipliers associated with the variable LowerBound property, returned as an array of the same size as the variable. Nonzero entries mean that the solution is at the lower bound. These multipliers are in the structure lambda.Variables._`variablename`_.Lower.
  * Upper — Lagrange multipliers associated with the variable UpperBound property, returned as an array of the same size as the variable. Nonzero entries mean that the solution is at the upper bound. These multipliers are in the structure lambda.Variables._`variablename`_.Upper.
- Constraints — Contains a field for each problem constraint. Each problem constraint is in a structure whose name is the constraint name, and whose value is a numeric array of the same size as the constraint. Nonzero entries mean that the constraint is active at the solution. These multipliers are in the structure lambda.Constraints._`constraintname`_.
Note
Elements of a constraint array all have the same comparison (<=, ==, or >=) and are all of the same type (linear, quadratic, or nonlinear).
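As a sketch of this naming scheme (the problem and the constraint name sumcon are hypothetical), the multipliers are indexed by the variable and constraint names you chose:

```matlab
% Hypothetical bound-constrained linear program with a named constraint
x = optimvar('x',2,'LowerBound',0);
prob = optimproblem('Objective', x(1) + 2*x(2));
prob.Constraints.sumcon = x(1) + x(2) >= 1;
[sol,fval,exitflag,output,lambda] = solve(prob);
lambda.Variables.x.Lower     % nonzero entries where x is at its lower bound
lambda.Constraints.sumcon    % nonzero when the sumcon constraint is active
```

Here the solver is linprog, so lambda is nonempty and follows the Variables/Constraints layout described above.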
Algorithms
Internally, the solve function solves optimization problems by calling a solver. For the default solver for the problem and supported solvers for the problem, see the solvers function. You can override the default by using the 'solver' name-value pair argument when calling solve.
Before solve can call a solver, the problem must be converted to solver form, either by solve or some other associated functions or objects. This conversion entails, for example, linear constraints having a matrix representation rather than an optimization variable expression.
The first step in the algorithm occurs as you place optimization expressions into the problem. An OptimizationProblem object has an internal list of the variables used in its expressions. Each variable has a linear index in the expression, and a size. Therefore, the problem variables have an implied matrix form. The prob2struct function performs the conversion from problem form to solver form. For an example, see Convert Problem to Structure.
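As a sketch of this conversion (the problem data are hypothetical), prob2struct exposes the matrix form that the solver actually receives:

```matlab
% Hypothetical linear program in problem form
x = optimvar('x',2,'LowerBound',0);
prob = optimproblem('Objective', x(1) + 2*x(2), ...
    'Constraints', x(1) + x(2) >= 1);
problem = prob2struct(prob);   % convert problem form to solver form
disp(problem.solver)           % default solver chosen for this problem
disp(problem.f)                % linear objective coefficients in matrix form
```

The implied matrix form is visible in fields such as f and the constraint matrices; field names depend on which solver prob2struct targets.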
For nonlinear optimization problems, solve uses automatic differentiation to compute the gradients of the objective function and nonlinear constraint functions. These derivatives apply when the objective and constraint functions are composed of Supported Operations for Optimization Variables and Expressions. When automatic differentiation does not apply, solvers estimate derivatives using finite differences. For details of automatic differentiation, see Automatic Differentiation Background. You can control how solve uses automatic differentiation with the ObjectiveDerivative name-value argument.
For the algorithm that intlinprog uses to solve MILP problems, see Legacy intlinprog Algorithm. For the algorithms that linprog uses to solve linear programming problems, see Linear Programming Algorithms. For the algorithms that quadprog uses to solve quadratic programming problems, see Quadratic Programming Algorithms. For linear or nonlinear least-squares solver algorithms, see Least-Squares (Model Fitting) Algorithms. For nonlinear solver algorithms, see Unconstrained Nonlinear Optimization Algorithms and Constrained Nonlinear Optimization Algorithms. For Global Optimization Toolbox solver algorithms, see the Global Optimization Toolbox documentation.
For nonlinear equation solving, solve internally represents each equation as the difference between the left and right sides. Then solve attempts to minimize the sum of squares of the equation components. For the algorithms for solving nonlinear systems of equations, see Equation Solving Algorithms. When the problem also has bounds, solve calls lsqnonlin to minimize the sum of squares of equation components. See Least-Squares (Model Fitting) Algorithms.
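As a sketch (the equation and bound are hypothetical), a bounded equation problem of this kind looks like:

```matlab
% Hypothetical scalar equation x^2 == 2 with a lower bound,
% so solve routes to lsqnonlin rather than fsolve
x = optimvar('x','LowerBound',0);
eqprob = eqnproblem;
eqprob.Equations.eq1 = x^2 == 2;
x0.x = 1;                       % initial point as a structure
[sol,fval] = solve(eqprob,x0);  % sol.x is near sqrt(2)
```

Because the equation is named eq1, fval is a structure whose eq1 field holds the left side minus the right side at the solution.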
Automatic differentiation (AD) applies to the solve andprob2struct functions under the following conditions:
- The objective and constraint functions are supported, as described in Supported Operations for Optimization Variables and Expressions. They do not require use of the fcn2optimexpr function.
- The solver called by solve is fmincon, fminunc, fsolve, or lsqnonlin.
- For optimization problems, the 'ObjectiveDerivative' and 'ConstraintDerivative' name-value pair arguments for solve or prob2struct are set to 'auto' (default), 'auto-forward', or 'auto-reverse'.
- For equation problems, the 'EquationDerivative' option is set to 'auto' (default), 'auto-forward', or 'auto-reverse'.
When AD Applies | All Constraint Functions Supported | One or More Constraints Not Supported |
---|---|---|
Objective Function Supported | AD used for objective and constraints | AD used for objective only |
Objective Function Not Supported | AD used for constraints only | AD not used |
Note
For linear or quadratic objective or constraint functions, applicable solvers always use explicit function gradients. These gradients are not produced using AD. See Closed Form.
When these conditions are not satisfied, solve estimates gradients by finite differences, and prob2struct does not create gradients in its generated function files.
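As a sketch (the objective is hypothetical), you can opt out of AD explicitly with the name-value argument, which forces finite-difference gradient estimation:

```matlab
% Hypothetical smooth objective; override the 'auto' derivative choice
x = optimvar('x',2);
prob = optimproblem('Objective', sum(x.^4) + sum(exp(x)));
x0.x = [0;0];
sol = solve(prob,x0,'ObjectiveDerivative','finite-differences');
```

Without the name-value argument, this supported-operations objective would use automatic differentiation by default.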
Solvers choose the following type of AD by default:
- For a general nonlinear objective function, fmincon defaults to reverse AD for the objective function. fmincon defaults to reverse AD for the nonlinear constraint function when the number of nonlinear constraints is less than the number of variables. Otherwise, fmincon defaults to forward AD for the nonlinear constraint function.
- For a general nonlinear objective function, fminunc defaults to reverse AD.
- For a least-squares objective function, fmincon and fminunc default to forward AD for the objective function. For the definition of a problem-based least-squares objective function, see Write Objective Function for Problem-Based Least Squares.
- lsqnonlin defaults to forward AD when the number of elements in the objective vector is greater than or equal to the number of variables. Otherwise, lsqnonlin defaults to reverse AD.
- fsolve defaults to forward AD when the number of equations is greater than or equal to the number of variables. Otherwise, fsolve defaults to reverse AD.
Note
To use automatic derivatives in a problem converted by prob2struct, pass options specifying these derivatives.

options = optimoptions('fmincon','SpecifyObjectiveGradient',true,...
    'SpecifyConstraintGradient',true);
problem.options = options;
Currently, AD works only for first derivatives; it does not apply to second or higher derivatives. So, for example, if you want to use an analytic Hessian to speed your optimization, you cannot use solve
directly, and must instead use the approach described in Supply Derivatives in Problem-Based Workflow.
Extended Capabilities
solve estimates derivatives in parallel for nonlinear solvers when the UseParallel option for the solver is true. For example,

options = optimoptions('fminunc','UseParallel',true);
[sol,fval] = solve(prob,x0,'Options',options)
solve does not use parallel derivative estimation when all objective and nonlinear constraint functions consist only of supported operations, as described in Supported Operations for Optimization Variables and Expressions. In this case, solve uses automatic differentiation for calculating derivatives. See Automatic Differentiation.
You can override automatic differentiation and use finite difference estimates in parallel by setting the 'ObjectiveDerivative' and 'ConstraintDerivative' arguments to 'finite-differences'.
When you specify a Global Optimization Toolbox solver that supports parallel computation (ga (Global Optimization Toolbox), particleswarm (Global Optimization Toolbox), patternsearch (Global Optimization Toolbox), or surrogateopt (Global Optimization Toolbox)), solve computes in parallel when the UseParallel option for the solver is true. For example,

options = optimoptions("patternsearch","UseParallel",true);
[sol,fval] = solve(prob,x0,"Options",options,"Solver","patternsearch")
Version History
Introduced in R2017b
To choose options or the underlying solver for solve, use name-value pairs. For example,
sol = solve(prob,'options',opts,'solver','quadprog');
The previous syntaxes were not as flexible, standard, or extensible as name-value pairs.