Choosing the Algorithm - MATLAB & Simulink

fmincon Algorithms

fmincon has five algorithm options:

Use optimoptions to set the Algorithm option at the command line.

Reasoning Behind the Recommendations

For descriptions of the algorithms, see Constrained Nonlinear Optimization Algorithms.

fsolve Algorithms

fsolve has three algorithms:

'trust-region-dogleg' (default)
'trust-region'
'levenberg-marquardt'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
Use the 'trust-region-dogleg' algorithm first.

For help if fsolve fails, see When the Solver Fails or When the Solver Might Have Succeeded.

To solve equations again if you have a Jacobian multiply function, or want to tune the internal algorithm (see Trust-Region Algorithm in fsolve options), try 'trust-region'.

Try timing all the algorithms, including 'levenberg-marquardt', to find the algorithm that works best on your problem.
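As a minimal sketch of the first recommendation, the following solves a small nonlinear system with the default algorithm and then reruns it with another algorithm for timing comparison. The system of equations here is illustrative, not taken from this page:

```matlab
% Illustrative system: 2*x1 - x2 = exp(-x1), -x1 + 2*x2 = exp(-x2)
fun = @(x) [2*x(1) - x(2) - exp(-x(1));
           -x(1) + 2*x(2) - exp(-x(2))];
x0 = [-5; -5];                           % starting point

% Try the recommended default first
opts = optimoptions(@fsolve,'Algorithm','trust-region-dogleg','Display','off');
[x,fval,exitflag] = fsolve(fun,x0,opts);

% Rerun with another algorithm to compare timing on your problem
opts.Algorithm = 'levenberg-marquardt';
x2 = fsolve(fun,x0,opts);
```

Wrapping each call in tic/toc (or timeit with a function handle) gives the per-algorithm timings the recommendation suggests comparing.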

Reasoning Behind the Recommendations

For descriptions of the algorithms, see Equation Solving Algorithms.

fminunc Algorithms

fminunc has two algorithms:

'quasi-newton' (default)
'trust-region'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
If your objective function includes a gradient, use 'Algorithm' = 'trust-region', and set the SpecifyObjectiveGradient option to true.

Otherwise, use 'Algorithm' = 'quasi-newton'.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
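A hedged sketch of the gradient recommendation follows. The Rosenbrock objective and the file name rosenboth are illustrative choices, not from this page; the function returns the gradient as a second output only when the solver requests it:

```matlab
% rosenboth.m -- illustrative objective (Rosenbrock) with analytic gradient
function [f,g] = rosenboth(x)
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1                          % solver asked for the gradient too
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
end
```

Then select the algorithm and tell fminunc that the objective supplies its own gradient:

opts = optimoptions(@fminunc,'Algorithm','trust-region','SpecifyObjectiveGradient',true);
x = fminunc(@rosenboth,[-1;2],opts);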

For descriptions of the algorithms, see Unconstrained Nonlinear Optimization Algorithms.

Least Squares Algorithms

lsqlin

lsqlin has three algorithms:

'interior-point' (default)
'trust-region-reflective'
'active-set'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
Try 'interior-point' first.

Tip: For better performance when your input matrix C has a large fraction of nonzero entries, specify C as an ordinary double matrix. Similarly, for better performance when C has relatively few nonzero entries, specify C as sparse. For data type details, see Sparse Matrices. You can also set the internal linear algebra type by using the 'LinearSolver' option.

If you have no constraints or only bound constraints, and want higher accuracy, more speed, or want to use a Jacobian Multiply Function with Linear Least Squares, try 'trust-region-reflective'.

If you have a large number of linear constraints and not a large number of variables, try 'active-set'. This is the only algorithm that cannot use sparse data; see Sparsity in Optimization Algorithms.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

See Potential Inaccuracy with Interior-Point Algorithms.
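As a minimal sketch of the first recommendation, the following bound-constrained least-squares problem uses 'interior-point'. The data values are illustrative, not from this page:

```matlab
% Minimize ||C*x - d|| subject to 0 <= x <= 1 (illustrative data)
C = [0.9501 0.7620;
     0.2311 0.4564;
     0.6068 0.0185];
d = [0.0578; 0.3528; 0.8131];
lb = zeros(2,1);
ub = ones(2,1);

% lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options) -- no linear constraints here
opts = optimoptions(@lsqlin,'Algorithm','interior-point','Display','off');
x = lsqlin(C,d,[],[],[],[],lb,ub,[],opts);
```

Per the tip above, whether C should be a full double matrix or sparse depends on its fraction of nonzero entries.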

For descriptions of the algorithms, see Least-Squares (Model Fitting) Algorithms.

lsqcurvefit and lsqnonlin

lsqcurvefit and lsqnonlin have three algorithms:

'trust-region-reflective' (default)
'levenberg-marquardt'
'interior-point'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
For problems with bound constraints only, try 'trust-region-reflective' or 'levenberg-marquardt' first.

If your bound-constrained problem is underdetermined (fewer equations than dimensions), try 'levenberg-marquardt' first.

If your problem has linear or nonlinear constraints, use 'interior-point'.

All of the algorithms can use sparse data; see Sparsity in Optimization Algorithms.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
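A hedged sketch of the underdetermined case follows: one residual equation in two unknowns, so 'levenberg-marquardt' is the recommended first choice. The residual function is illustrative, not from this page:

```matlab
% Underdetermined problem: find x on the unit circle (1 equation, 2 unknowns)
fun = @(x) x(1)^2 + x(2)^2 - 1;          % single residual
x0 = [0.5; 0.5];                         % starting point

% lsqnonlin(fun,x0,lb,ub,options) -- bounds left empty in this sketch
opts = optimoptions(@lsqnonlin,'Algorithm','levenberg-marquardt','Display','off');
x = lsqnonlin(fun,x0,[],[],opts);
```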

For descriptions of the algorithms, see Least-Squares (Model Fitting) Algorithms.

Linear Programming Algorithms

linprog has four algorithms:

Use optimoptions to set the Algorithm option at the command line.

Reasoning Behind the Recommendations

For descriptions of the algorithms, see Linear Programming Algorithms.

Mixed-Integer Linear Programming Algorithms

intlinprog has two algorithms:

Use optimoptions to set the Algorithm option at the command line.

Reasoning Behind the Recommendations

For descriptions of the algorithms, see Mixed-Integer Linear Programming (MILP) Algorithms.

Quadratic Programming Algorithms

quadprog has three algorithms:

'interior-point-convex' (default)
'trust-region-reflective'
'active-set'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
If you have a convex problem, or if you don't know whether your problem is convex, use 'interior-point-convex'.

Tip: For better performance when your Hessian matrix H has a large fraction of nonzero entries, specify H as an ordinary double matrix. Similarly, for better performance when H has relatively few nonzero entries, specify H as sparse. For data type details, see Sparse Matrices. You can also set the internal linear algebra type by using the 'LinearSolver' option.

If you have a nonconvex problem with only bounds, or with only linear equalities, use 'trust-region-reflective'.

If you have a positive semidefinite problem with a large number of linear constraints and not a large number of variables, try 'active-set'. This algorithm cannot use sparse data; see Sparsity in Optimization Algorithms.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

See Potential Inaccuracy with Interior-Point Algorithms.
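As a minimal sketch of the first recommendation, the following convex quadratic program uses 'interior-point-convex'. The H, f, A, and b values are illustrative, not from this page:

```matlab
% Minimize 1/2*x'*H*x + f'*x subject to A*x <= b (illustrative convex QP)
H = [1 -1; -1 2];                        % positive definite Hessian
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];

% quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options)
opts = optimoptions(@quadprog,'Algorithm','interior-point-convex','Display','off');
x = quadprog(H,f,A,b,[],[],[],[],[],opts);
```

Per the tip above, whether H should be full or sparse depends on its fraction of nonzero entries.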

For descriptions of the algorithms, see Quadratic Programming Algorithms.

Sparsity in Optimization Algorithms

Some optimization algorithms accept sparse data and do not need to store, nor operate on, full matrices. These algorithms use sparse linear algebra for computations whenever possible. Furthermore, the algorithms either preserve sparsity, such as a sparse Cholesky decomposition, or do not generate matrices, such as a conjugate gradient method. These algorithms can be preferable when you have large linear constraint matrices A or Aeq, or when you have a large number of problem variables.

In contrast, some algorithms internally create full matrices and use dense linear algebra. If a problem is sufficiently large, full matrices take up a significant amount of memory, and the dense linear algebra may require a long time to execute. For problems that do not have many variables and whose linear constraint matrices are not large, these algorithms can sometimes be faster than algorithms that use sparsity.
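The storage difference described above can be seen directly in the workspace. This sketch compares a full and a sparse identity matrix; the dimension 1000 is an arbitrary illustrative choice:

```matlab
% Same matrix stored two ways: full storage holds all n^2 entries,
% sparse storage holds only the n nonzeros plus their indices.
n = 1000;
Afull = eye(n);                 % dense: n^2 doubles, ~8 MB for n = 1000
Asparse = speye(n);             % sparse: n nonzeros plus index arrays
whos Afull Asparse              % compare the Bytes column in the listing
```

Algorithms that accept sparse data can pass Asparse straight through their linear algebra; algorithms that use dense linear algebra would effectively expand it back to the full form.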

This table shows which solvers and algorithms can use sparse data and which cannot.

Potential Inaccuracy with Interior-Point Algorithms

Interior-point algorithms in fmincon, quadprog, lsqlin, and linprog have many good characteristics, such as low memory usage and the ability to solve large problems quickly. However, their solutions can be slightly less accurate than those from other algorithms. The reason for this potential inaccuracy is that the (internally calculated) barrier function keeps iterates away from inequality constraint boundaries.

For most practical purposes, this inaccuracy is usually quite small.

To reduce the inaccuracy, try to:

Rerun the solver with smaller StepTolerance, OptimalityTolerance, and possibly ConstraintTolerance tolerances (but keep the tolerances sensible); see Tolerances and Stopping Criteria.

Run a different algorithm, starting from the interior-point solution.

For example, try to minimize the function x when bounded below by 0. Using the fmincon default interior-point algorithm:

options = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)

Using the fmincon sqp algorithm:

options.Algorithm = 'sqp';
x2 = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)

Similarly, solve the same problem using the linprog interior-point-legacy algorithm:

opts = optimoptions(@linprog,'Display','off','Algorithm','interior-point-legacy');
x = linprog(1,[],[],[],[],0,[],1,opts)

Using the linprog dual-simplex algorithm:

opts.Algorithm = 'dual-simplex';
x2 = linprog(1,[],[],[],[],0,[],1,opts)

In these cases, the interior-point algorithms are less accurate, but the answers are quite close to the correct answer.
