dldivergence - Divergence of deep learning data - MATLAB

Divergence of deep learning data

Since R2024b

Syntax

Description

The divergence deep learning operation returns the mathematical divergence of neural network and model function outputs with respect to the specified input data and operation dimension.

div = dldivergence(u,x,dim) returns the sum of the partial derivatives of the neural network outputs u with respect to the data x for the specified operation dimension.


div = dldivergence(u,x,dim,EnableHigherDerivatives=tf) also specifies whether to enable higher derivatives by tracing the backward pass.

Examples


Evaluate Divergence of Deep Learning Data

Create a neural network.

numChannels = 3;

layers = [
    featureInputLayer(numChannels)
    fullyConnectedLayer(numChannels)
    tanhLayer];

net = dlnetwork(layers);

Load the training data. For the purposes of this example, generate some random data.

numObservations = 128;

X = rand(numChannels,numObservations);
X = dlarray(X,"CB");

T = rand(numChannels,numObservations);
T = dlarray(T,"CB");

Define a model loss function that takes the network and data as input and returns the loss, gradients of the loss with respect to the learnable parameters, and the divergence of the predictions with respect to the input data.

function [loss,gradients,div] = modelLoss(net,X,T)

Y = forward(net,X);
loss = l1loss(Y,T);

X = stripdims(X);
Y = stripdims(Y);

div = dldivergence(Y,X,1);
gradients = dlgradient(loss,net.Learnables);

end

Evaluate the model loss function using the dlfeval function.

[loss,gradients,div] = dlfeval(@modelLoss,net,X,T);

View the size of the divergence.
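Because dldivergence returns one value per observation, div is an unformatted 1-by-numObservations dlarray. For example:

```matlab
% View the size of the divergence: one value per observation.
% For this example, with 128 observations, size(div) is 1-by-128.
size(div)
```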

Input Arguments


u — Input

traced dlarray matrix

Input, specified as a traced dlarray matrix.

When the software evaluates a function with automatic differentiation enabled (for example, a function evaluated using the dlfeval function), the software traces the input dlarray objects.

The sizes of u and x must match.

x — Input

traced dlarray matrix

Input, specified as a traced dlarray matrix.

When the software evaluates a function with automatic differentiation enabled (for example, a function evaluated using the dlfeval function), the software traces the input dlarray objects.

The sizes of u and x must match.

dim — Operation dimension

positive integer

Operation dimension of u, specified as a positive integer.

The dldivergence function treats the remaining dimensions of the data as independent batch dimensions.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
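To illustrate the operation dimension, this sketch (an illustrative example, not from the original page; the function name scaledDivergence is hypothetical) computes the divergence of the linear mapping u = 2x. Each partial derivative ∂u(i,n)/∂x(i,n) equals 2, so summing over the K entries of the operation dimension gives div(n) = 2*K for every observation n.

```matlab
% Sketch (assumes R2024b or later): divergence of u = 2x.
% With K = 3 channels, each entry of div should equal 2*K = 6.
X = dlarray(rand(3,5),"CB");
div = dlfeval(@scaledDivergence,X);

function div = scaledDivergence(X)
X = stripdims(X);           % dldivergence expects unformatted data
U = 2*X;                    % linear mapping with Jacobian 2*eye(K)
div = dldivergence(U,X,1);  % operation dimension 1 (channels)
end
```

The remaining dimension (here, the batch of 5 observations) is treated as an independent batch dimension, so div has one entry per observation.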

tf — Flag to enable higher-order derivatives

true or 1 (default) | false or 0

Flag to enable higher-order derivatives, specified as one of the following:

1 (true) — Enable higher-order derivatives. Trace the backward pass so that you can compute higher-order derivatives using automatic differentiation.

0 (false) — Disable higher-order derivatives. Do not trace the backward pass.
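When you do not differentiate div itself (for example, when the divergence is a diagnostic rather than part of the loss), disabling higher derivatives can reduce memory use. A hedged sketch, reusing the unformatted traced arrays Y and X from the earlier example:

```matlab
% Sketch: skip tracing the backward pass when div is not
% differentiated further inside the model loss function.
div = dldivergence(Y,X,1,EnableHigherDerivatives=false);
```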

Output Arguments


div — Divergence

unformatted dlarray object

Divergence, returned as an unformatted 1-by-N dlarray object, where N is the size of the batch dimension of the data. The value of div(n) is

div u(:,n) = ∇·u(:,n) = ∑_{i=1}^{K} ∂u(i,n)/∂x(i,n),

where K is the size of the operation dimension of the data, i indexes into the operation dimension, and n indexes into the batch dimension.

Version History

Introduced in R2024b