dljacobian - Jacobian matrix deep learning operation - MATLAB
Jacobian matrix deep learning operation
Since R2024b
Syntax
jac = dljacobian(u,x,dim)
jac = dljacobian(u,x,dim,EnableHigherDerivatives=tf)
Description
The Jacobian deep learning operation returns the Jacobian matrix of neural network and model function outputs with respect to the specified input data, for the specified operation dimension.
jac = dljacobian(u,x,dim) returns the Jacobian matrix for the neural network outputs u with respect to the data x for the specified operation dimension.
jac = dljacobian(u,x,dim,EnableHigherDerivatives=tf) also specifies whether to enable higher derivatives by tracing the backward pass.
Examples
Evaluate Jacobian of Deep Learning Data
Create a neural network.
inputSize = [16 16 3];
numOutputChannels = 5;

layers = [
    imageInputLayer(inputSize)
    convolution2dLayer(3,64)
    reluLayer
    fullyConnectedLayer(numOutputChannels)
    softmaxLayer];
net = dlnetwork(layers);
Load the training data. For the purposes of this example, generate some random data.
numObservations = 128;

X = rand([inputSize numObservations]);
X = dlarray(X,"SSCB");

T = rand([numOutputChannels numObservations]);
T = dlarray(T,"CB");
Define a model loss function that takes the network and data as input and returns the loss, gradients of the loss with respect to the learnable parameters, and the Jacobian of the predictions with respect to the input data.
function [loss,gradients,jac] = modelLoss(net,X,T)

Y = forward(net,X);
loss = l1loss(Y,T);

X = stripdims(X);
Y = stripdims(Y);

jac = dljacobian(Y,X,1);
gradients = dlgradient(loss,net.Learnables);

end
Evaluate the model loss function using the dlfeval function.
[loss,gradients,jac] = dlfeval(@modelLoss,net,X,T);
View the size of the Jacobian.
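A minimal way to do this (jac is the unformatted dlarray returned by the modelLoss function above):

% View the size of the Jacobian.
size(jac)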
Input Arguments
u — Input
traced dlarray matrix

Input, specified as a traced dlarray matrix.

When the software evaluates a function with automatic differentiation enabled, the software traces the input dlarray objects. These are some contexts where the software traces dlarray objects:

- Inside loss functions that the trainnet function evaluates
- Inside forward functions that custom layers evaluate
- Inside model and model loss functions that the dlfeval function evaluates

The sizes of the dimensions not specified by the dim argument must match.
x — Input
traced dlarray object

Input, specified as a traced dlarray object.

When the software evaluates a function with automatic differentiation enabled, the software traces the input dlarray objects. These are some contexts where the software traces dlarray objects:

- Inside loss functions that the trainnet function evaluates
- Inside forward functions that custom layers evaluate
- Inside model and model loss functions that the dlfeval function evaluates

The sizes of the dimensions not specified by the dim argument must match.
dim — Operation dimension
positive integer

Operation dimension of u, specified as a positive integer. The dljacobian function treats the remaining dimensions of the data as independent batch dimensions.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
tf — Flag to enable higher-order derivatives
true or 1 (default) | false or 0

Flag to enable higher-order derivatives, specified as one of the following:

- true — Enable higher-order derivatives. Trace the backward pass so that the returned Jacobian matrix can be used in further computations for subsequent calls to functions that compute derivatives using automatic differentiation (for example, dlgradient, dljacobian, dldivergence, and dllaplacian).
- false — Disable higher-order derivatives. Do not trace the backward pass. Use this option when you only need first-order derivatives, as this is usually quicker and requires less memory.
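For instance, a minimal sketch of reusing the Jacobian in a further derivative computation. The function name higherOrderExample and the data are illustrative, not part of the example above.

function [jac,gradTotal] = higherOrderExample(x)

% Elementwise operation, so u is the same size as x.
u = sin(x);

% Trace the backward pass so that jac can be differentiated again.
jac = dljacobian(u,x,1,EnableHigherDerivatives=true);

% Differentiate the sum of the Jacobian entries with respect to x.
gradTotal = dlgradient(sum(jac(:)),x);

end

x = dlarray(rand(4,3));
[jac,gradTotal] = dlfeval(@higherOrderExample,x);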
Output Arguments
jac — Jacobian matrix
unformatted dlarray object

Jacobian matrix, returned as an unformatted dlarray object.

The layout of jac depends on dim and the size of u.
If size(u,dim) == 1, then jac is a matrix, and:

- If dim is 1, then jac(j,k) corresponds to (∇u)(j,k) = ∂u_k/∂x(j,k), where u_k corresponds to u(:,k).
- If dim is 2, then jac(k,j) corresponds to (∇u)(k,j) = ∂u_k/∂x(k,j), where u_k corresponds to u(k,:).
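As a sketch of this matrix case, assuming an illustrative function scalarCaseExample whose output has size 1 along the operation dimension:

function jac = scalarCaseExample(x)

% u is 1-by-numObservations, so size(u,1) == 1.
u = sum(x.^2,1);

% jac(j,k) corresponds to the derivative of u(:,k) with respect to
% x(j,k), which is 2*x(j,k) for this choice of u.
jac = dljacobian(u,x,1);

end

x = dlarray(rand(3,4));
jac = dlfeval(@scalarCaseExample,x);
size(jac)    % same size as x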
Otherwise, if size(u,dim) > 1, then jac is a 3-D array, and:

- If dim is 1, then jac(i,j,k) corresponds to (∇u)(i,j,k) = ∂u(i,k)/∂x(j,k).
- If dim is 2, then jac(i,k,j) corresponds to (∇u)(i,k,j) = ∂u(k,i)/∂x(k,j).
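As a sketch of the 3-D layout, assuming an illustrative linear map (the function name linearJacobian and the sizes are not from the example above). Because u = A*x, each page of jac equals the weight matrix A.

function jac = linearJacobian(A,x)

% u is numOutputs-by-numObservations.
u = A*x;

% For dim = 1, jac(i,j,k) corresponds to the derivative of u(i,k) with
% respect to x(j,k), which is A(i,j) for every observation k.
jac = dljacobian(u,x,1);

end

A = rand(4,3);
x = dlarray(rand(3,5));
jac = dlfeval(@linearJacobian,A,x);

size(jac)                                % 4-by-3-by-5
max(abs(extractdata(jac) - A),[],"all")  % close to zero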
Version History
Introduced in R2024b