Supply Derivatives in Problem-Based Workflow

Why Include Derivatives?

This example shows how to include derivative information in nonlinear problem-based optimization when automatic derivatives do not apply, or when you want to include a Hessian, which automatic differentiation does not supply. Including gradients or a Hessian in an optimization can make the solution more efficient (fewer function evaluations) and more robust, because the solver does not have to estimate derivatives by finite differences.

Automatic Differentiation Applied to Optimization

Starting in R2020b, the solve function can use automatic differentiation for supported functions in order to supply gradients to solvers. These automatic derivatives apply only to gradients (first derivatives), not Hessians (second derivatives).

Beginning in R2022b, automatic differentiation applies whether or not you use fcn2optimexpr to create an objective or constraint function. Previously, using fcn2optimexpr caused automatic differentiation not to apply. If you need to use fcn2optimexpr, this example shows how to include derivative information.
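For example, when the objective uses only supported operations, solve supplies gradients through automatic differentiation with no extra setup. The following sketch uses illustrative variable names u and v; the objectivederivative field of the output structure (shown for fmincon later in this example) reports how the solver obtained derivatives.

% Sketch: a small problem whose objective uses only supported operations,
% so solve can apply automatic differentiation.
p = optimproblem;
u = optimvar('u');
v = optimvar('v');
p.Objective = log(1 + u^2) + (v - 2)^2;
x0ad.u = 0;                 % nonlinear solvers require an initial point
x0ad.v = 0;
[sol,fval,exitflag,output] = solve(p,x0ad);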

The way to use derivatives in problem-based optimization without automatic differentiation is to convert your problem using prob2struct, and then edit the resulting objective and constraint functions. This example shows a hybrid approach where automatic differentiation supplies derivatives for part of the objective function.

Create Optimization Problem

With control variables x and y, use the objective function

fun = fun1 + fun2, where fun1 = 100*(y - x^2)^2 + (1 - x)^2 and fun2 = besselj(1,x^2 + y^2).

Include the constraint that the sum of squares of x and y is no more than 4.

fun2 is not based on supported functions for optimization expressions; see Supported Operations for Optimization Variables and Expressions. Therefore, to include fun2 in an optimization problem, you must convert it to an optimization expression using fcn2optimexpr.

To use AD on the supported functions, set up the problem without the unsupported function fun2, and include fun2 later.

prob = optimproblem;
x = optimvar('x');
y = optimvar('y');
fun1 = 100*(y - x^2)^2 + (1 - x)^2;
prob.Objective = fun1;
prob.Constraints.cons = x^2 + y^2 <= 4;
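To review the problem formulation before converting it, you can optionally display it using the show function.

% Optional: display the objective and constraints for review.
show(prob)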

Convert Problem to Solver-Based Form

To include derivatives of fun2, first convert the problem without fun2 to a structure using prob2struct.

problem = prob2struct(prob,...
    'ObjectiveFunctionName','generatedObjectiveBefore');

During the conversion, prob2struct creates function files that represent the objective and nonlinear constraint functions. By default, these functions have the names 'generatedObjective.m' and 'generatedConstraints.m'. The objective function file without fun2 is 'generatedObjectiveBefore.m'.

The generated objective and constraint functions include gradients.
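To inspect the generated code, including its gradient computation, you can display the file in the Command Window.

% View the generated objective function file.
type generatedObjectiveBefore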

Calculate Derivatives and Keep Track of Variables

Calculate the derivatives associated with fun2. If you have a Symbolic Math Toolbox™ license, you can use the gradient (Symbolic Math Toolbox) or jacobian (Symbolic Math Toolbox) function to help compute the derivatives. See Calculate Gradients and Hessians Using Symbolic Math Toolbox. To validate a manually generated gradient function by comparing it to finite-difference approximations, use checkGradients.
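For example, after you edit the objective file later in this example, a call such as the following compares the hand-coded gradient to finite-difference estimates at a test point. (This sketch assumes the edited file is on the path and returns the objective value and gradient; the test point [1;1] is arbitrary.)

% Compare the coded gradient with finite-difference approximations.
valid = checkGradients(@generatedObjectiveBefore,[1;1])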

The solver-based approach has one control variable. Each optimization variable (x and y, in this example) is a portion of the control variable. You can find the mapping between optimization variables and the single control variable in the generated objective function file 'generatedObjectiveBefore.m'. The beginning of the file contains these lines of code or similar lines.

%% Variable indices.
xidx = 1;
yidx = 2;

%% Map solver-based variables to problem-based.
x = inputVariables(xidx);
y = inputVariables(yidx);

In this code, the control variable has the name inputVariables.

Alternatively, you can find the mapping by using varindex.

idx = varindex(prob);
disp(idx.x)

The full objective function includes fun2:

fun2 = besselj(1,x^2 + y^2);

Using standard calculus, calculate gradfun2, the gradient of fun2. With ssq = x^2 + y^2 and the identity d/dz besselj(1,z) = besselj(0,z) - besselj(1,z)/z, the gradient is gradfun2 = 2*[x; y]*(besselj(0,ssq) - besselj(1,ssq)/ssq).
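If you have a Symbolic Math Toolbox license, you can check this result symbolically. The following sketch uses symbolic variables xs and ys as stand-ins for x and y.

% Symbolic check of the gradient of fun2 (requires Symbolic Math Toolbox).
syms xs ys
gradfun2sym = gradient(besselj(1,xs^2 + ys^2),[xs; ys])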

Edit the Objective and Constraint Files

Edit 'generatedObjectiveBefore.m' to include fun2.

%% Compute objective function.
arg1 = (y - x.^2);
arg2 = 100;
arg3 = arg1.^2;
arg4 = (1 - x);
obj = ((arg2 .* arg3) + arg4.^2);

ssq = x^2 + y^2;
fun2 = besselj(1,ssq);
obj = obj + fun2;

Include the calculated gradients in the objective function file by editing 'generatedObjectiveBefore.m'. If you have a software version that does not perform the gradient calculation, include all of these lines. If your software performs the gradient calculation, include only the lines that calculate the gradient of fun2 (the arg7, arg8, and gfun lines, and the line that adds gfun to grad).

%% Compute objective gradient.
if nargout > 1
    arg5 = 1;
    arg6 = zeros([2, 1]);
    arg6(xidx,:) = (-(arg5.*2.*(arg4(:)))) + ((-((arg5.*arg2(:)).*2.*(arg1(:)))).*2.*(x(:)));
    arg6(yidx,:) = ((arg5.*arg2(:)).*2.*(arg1(:)));
    grad = arg6(:);

    arg7 = besselj(0,ssq);
    arg8 = arg7 - fun2/ssq;
    gfun = [2*x*arg8;...
        2*y*arg8];

    grad = grad + gfun;
end
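As a quick check that the edited file runs and returns both outputs, you can evaluate it at a test point (a sketch; the point [1;1] is arbitrary).

% Evaluate the edited objective file to confirm it returns a value and a gradient.
[ftest,gtest] = generatedObjectiveBefore([1;1])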

Recall that the nonlinear constraint is x^2 + y^2 <= 4. The gradient of this constraint function is 2*[x;y]. If your software calculates the constraint gradient and includes it in the generated constraint file, then you do not need to do anything more. If your software does not calculate the constraint gradient, then include the gradient of the nonlinear constraint by editing 'generatedConstraints.m' to include these lines.

%% Insert gradient calculation here.
% If you include a gradient, notify the solver by setting the
% SpecifyConstraintGradient option to true.
if nargout > 2
    cineqGrad = 2*[x;y];
    ceqGrad = [];
end

Run Problem With and Without Gradients

Run the problem using both the solver-based and problem-based (no gradient) methods to see the differences. To run the solver-based problem using derivative information, create appropriate options in the problem structure.

options = optimoptions('fmincon','SpecifyObjectiveGradient',true,...
    'SpecifyConstraintGradient',true);
problem.options = options;

Nonlinear problems require a nonempty x0 field in the problem structure.

x0 = [1;1];
problem.x0 = x0;

Call fmincon on the problem structure.

[xsolver,fvalsolver,exitflagsolver,outputsolver] = fmincon(problem)

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.

xsolver =

1.2494
1.5617

fvalsolver =

-0.0038

exitflagsolver =

 1

outputsolver =

struct with fields:

     iterations: 15
      funcCount: 32
constrviolation: 0
       stepsize: 1.5569e-06
      algorithm: 'interior-point'
  firstorderopt: 2.2058e-08
   cgiterations: 7
        message: '↵Local minimum found that satisfies the constraints.↵↵Optimization completed because the objective function is non-decreasing in ↵feasible directions, to within the value of the optimality tolerance,↵and constraints are satisfied to within the value of the constraint tolerance.↵↵<stopping criteria details>↵↵Optimization completed: The relative first-order optimality measure, 2.125635e-08,↵is less than options.OptimalityTolerance = 1.000000e-06, and the relative maximum constraint↵violation, 0.000000e+00, is less than options.ConstraintTolerance = 1.000000e-06.↵↵'
   bestfeasible: [1×1 struct]

Compare this solution with the one obtained from solve without derivative information.

init.x = x0(1);
init.y = x0(2);
f2 = @(x,y)besselj(1,x^2 + y^2);
fun2 = fcn2optimexpr(f2,x,y);
prob.Objective = prob.Objective + fun2;
[xproblem,fvalproblem,exitflagproblem,outputproblem] = solve(prob,init,...
    "ConstraintDerivative","finite-differences",...
    "ObjectiveDerivative","finite-differences")

Solving problem using fmincon.

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.

xproblem =

struct with fields:

x: 1.2494
y: 1.5617

fvalproblem =

-0.0038

exitflagproblem =

OptimalSolution

outputproblem =

struct with fields:

          iterations: 15
           funcCount: 64
     constrviolation: 0
            stepsize: 1.5571e-06
           algorithm: 'interior-point'
       firstorderopt: 6.0139e-07
        cgiterations: 7
             message: '↵Local minimum found that satisfies the constraints.↵↵Optimization completed because the objective function is non-decreasing in ↵feasible directions, to within the value of the optimality tolerance,↵and constraints are satisfied to within the value of the constraint tolerance.↵↵<stopping criteria details>↵↵Optimization completed: The relative first-order optimality measure, 5.795368e-07,↵is less than options.OptimalityTolerance = 1.000000e-06, and the relative maximum constraint↵violation, 0.000000e+00, is less than options.ConstraintTolerance = 1.000000e-06.↵↵'
        bestfeasible: [1×1 struct]
 objectivederivative: "finite-differences"
constraintderivative: "closed-form"
              solver: 'fmincon'

Both solutions report the same function value to display precision, and both require the same number of iterations. However, the solution with gradients requires only 32 function evaluations, compared to 64 for the solution without gradients.
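Optionally, you can collect the two runs in a table for a side-by-side comparison. This sketch uses the variables computed earlier in this example.

% Summarize function-evaluation counts and objective values for both runs.
T = table([outputsolver.funcCount; outputproblem.funcCount],...
    [fvalsolver; fvalproblem],...
    'VariableNames',{'FuncCount','Fval'},...
    'RowNames',{'With gradients';'Without gradients'})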

Include Hessian

To include a Hessian, you must use prob2struct, even if all your functions are supported for optimization expressions. This example shows how to use a Hessian for the fmincon interior-point algorithm. The fminunc trust-region algorithm and the fmincon trust-region-reflective algorithm use a different syntax; see Hessian for fminunc trust-region or fmincon trust-region-reflective algorithms.

As described in Hessian for fmincon interior-point algorithm, the Hessian is the Hessian of the Lagrangian.

$$\nabla_{xx}^2 L(x,\lambda) = \nabla^2 f(x) + \sum_i \lambda_{g,i}\,\nabla^2 g_i(x) + \sum_i \lambda_{h,i}\,\nabla^2 h_i(x).$$

Include this Hessian by creating a function file to compute the Hessian, and creating a HessianFcn option for fmincon to use the Hessian. To create the Hessian in this case, create the second derivatives of the objective and nonlinear constraints separately.

The second derivative matrix of the objective function for this example is somewhat complicated. The listing of its Hessian function, hessianfun(x), was created by using Symbolic Math Toolbox with the same approach as described in Calculate Gradients and Hessians Using Symbolic Math Toolbox.

function hf = hessianfun(in1)
%HESSIANFUN
%    HF = HESSIANFUN(IN1)

%    This function was generated by the Symbolic Math Toolbox version 8.6.
%    10-Aug-2020 10:50:44

x = in1(1,:);
y = in1(2,:);
t2 = x.^2;
t4 = y.^2;
t6 = x.*4.0e+2;
t3 = t2.^2;
t5 = t4.^2;
t7 = -t4;
t8 = -t6;
t9 = t2+t4;
t10 = t2.*t4.*2.0;
t11 = besselj(0,t9);
t12 = besselj(1,t9);
t13 = t2+t7;
t14 = 1.0./t9;
t16 = t3+t5+t10-2.0;
t15 = t14.^2;
t17 = t11.*t14.*x.*y.*4.0;
t19 = t11.*t13.*t14.*2.0;
t18 = -t17;
t20 = t12.*t15.*t16.*x.*y.*4.0;
t21 = -t20;
t22 = t8+t18+t21;
hf = reshape([t2.*1.2e+3-t19-y.*4.0e+2-t12.*t15.*...
    (t2.*-3.0+t4+t2.*t5.*2.0+t3.*t4.*4.0+t2.^3.*2.0).*2.0+2.0,...
    t22,t22,...
    t19-t12.*t15.*(t2-t4.*3.0+t2.*t5.*4.0+...
    t3.*t4.*2.0+t4.^3.*2.0).*2.0+2.0e+2],[2,2]);

In contrast, the Hessian of the nonlinear inequality constraint is simple; it is twice the identity matrix.
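Define this constraint Hessian so that the Lagrangian Hessian can use it.

hessianc = 2*eye(2); % Hessian of the constraint x^2 + y^2 - 4 <= 0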

Create the Hessian of the Lagrangian as a function handle.

H = @(x,lam)(hessianfun(x) + hessianc*lam.ineqnonlin);
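To confirm that the handle evaluates as expected, you can test it at the earlier solution with a multiplier of 1 (a sketch; the multiplier value is arbitrary).

% Evaluate the Lagrangian Hessian at the previous solution point.
lamtest.ineqnonlin = 1;
disp(H(xsolver,lamtest))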

Create options to use this Hessian.

problem.options.HessianFcn = H;

Solve the problem and display the number of iterations and number of function evaluations. The solution is approximately the same as before.

[xhess,fvalhess,exitflaghess,outputhess] = fmincon(problem);
disp(outputhess.iterations)
disp(outputhess.funcCount)

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.

8

10

This time, fmincon takes only 8 iterations compared to 15, and only 10 function evaluations compared to 32. In summary, providing an analytic Hessian calculation can improve the efficiency of the solution process, but developing a function to calculate the Hessian can be difficult.

See Also

prob2struct | varindex | fcn2optimexpr | checkGradients
