predict - Compute deep learning network output for inference - MATLAB
Compute deep learning network output for inference
Syntax
Description
Some deep learning layers behave differently during training and inference (prediction). For example, during training, dropout layers randomly set input elements to zero to help prevent overfitting, but during inference, dropout layers do not change the input.
Y = predict(net,X) returns the network output Y during inference given the input data X and the network net with a single input and a single output.
Y = predict(net,X1,...,XM) returns the network output Y during inference given the M inputs X1, ..., XM and the network net that has M inputs and a single output.
[Y1,...,YN] = predict(___) returns the N outputs Y1, …, YN during inference for networks that have N outputs using any of the previous syntaxes.
[Y1,...,YK] = predict(___,Outputs=layerNames) returns the outputs Y1, …, YK during inference for the specified layers using any of the previous syntaxes.
[___] = predict(___,Name=Value) specifies additional options using one or more name-value arguments.
[___,state] = predict(___) also returns the updated network state.
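For instance, a hedged sketch of the state-returning syntax (the network net and the sequence input below are placeholder assumptions; adjust the sizes and format to your network):

% Sketch only: single-input, single-output network with stateful layers.
X = dlarray(rand(10,1,20,"single"),"CBT");   % 10 channels, 1 observation, 20 time steps
[Y,state] = predict(net,X);                  % network output plus the updated state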
Examples
Classify Image Using Pretrained Neural Network
Load a pretrained SqueezeNet neural network into the workspace.
[net,classNames] = imagePretrainedNetwork;
Read an image from a PNG file and classify it. To classify the image, first convert it to the data type single.
im = imread("peppers.png");
figure
imshow(im)
X = single(im);
scores = predict(net,X);
[label,score] = scores2label(scores,classNames);
Display the image with the predicted label and corresponding score.
figure
imshow(im)
title(string(label) + " (Score: " + score + ")")
Input Arguments
net — Network for custom training loops or custom pruning loops
dlnetwork object | TaylorPrunableNetwork object
This argument can represent either of these:
- Network for custom training loops, specified as a dlnetwork object.
- Network for custom pruning loops, specified as a TaylorPrunableNetwork object.
To prune a deep neural network, you require the Deep Learning Toolbox™ Model Quantization Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Quantization Library.
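As a rough sketch of where predict fits in a custom pruning loop (the conversion below assumes you already have a trained dlnetwork named net and input data X; it is an illustration, not a complete pruning workflow):

% Sketch only: convert a trained dlnetwork for Taylor pruning, then run inference.
prunableNet = taylorPrunableNetwork(net);   % requires the Model Quantization Library
Y = predict(prunableNet,X);                 % output of the prunable network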
X — Input data
numeric array | dlarray object
Input data, specified as one of these values:
- Numeric array (since R2023b)
- Unformatted dlarray object (since R2023b)
- Formatted dlarray object
Tip
Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be t-by-c arrays, where t and c are the numbers of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions output data in the layout that the network expects. If your data is in a different layout from what the network expects, then indicate that your data has a different layout by using the InputDataFormats option or by specifying input data as a formatted dlarray object. It is usually easier to adjust the InputDataFormats option than to preprocess the input data.
For neural networks that do not have input layers, you must use the InputDataFormats option or use formatted dlarray objects.
For more information, see Deep Learning Data Formats.
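For instance, a minimal sketch of passing a formatted dlarray so that the layout is unambiguous (the "SSCB" format and the image variable im are assumptions; preprocess the image to match the size your network expects):

% Sketch: wrap image data in a formatted dlarray so predict knows the layout.
% "SSCB" = spatial, spatial, channel, batch.
X = dlarray(single(im),"SSCB");
Y = predict(net,X);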
layerNames — Layers to extract outputs from
string array | cell array of character vectors
Layers to extract outputs from, specified as a string array or a cell array of character vectors containing the layer names.
- If layerNames(i) corresponds to a layer with a single output, then layerNames(i) is the name of the layer.
- If layerNames(i) corresponds to a layer with multiple outputs, then layerNames(i) is the layer name followed by the / character and the name of the layer output: "layerName/outputName".
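For example, a hedged sketch of extracting the activations of two named layers (the layer names "conv_1" and "relu_1" are hypothetical; list the layers of your own network with net.Layers):

% Sketch: return the outputs of two specific layers instead of the final network output.
[actConv,actRelu] = predict(net,X,Outputs=["conv_1" "relu_1"]);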
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: Y = predict(net,X,InputDataFormats="CBT") makes predictions with sequence data that has format "CBT" (channel, batch, time).
InputDataFormats — Description of input data dimensions
"auto" (default) | string array | cell array of character vectors | character vector
Since R2023b
Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.
If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT"
(channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.
For a neural network with multiple inputs net, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
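As a hedged sketch of the multiple-input case (the two-input network net and the shapes of X1 and X2 are assumptions; the order of the formats must match net.InputNames):

% Sketch: a two-input network where the first input is image-like ("SSCB")
% and the second is a feature vector ("CB").
Y = predict(net,X1,X2,InputDataFormats=["SSCB" "CB"]);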
OutputDataFormats — Description of output data dimensions
"auto" (default) | string array | cell array of character vectors | character vector
Since R2023b
Description of the output data dimensions, specified as one of these values:
"auto"
— If the output data has the same number of dimensions as the input data, then thepredict
function uses the format specified byInputDataFormats
. If the output data has a different number of dimensions to the input data, then thepredict
function automatically permutes the dimensions of the output data so that they are consistent with the network input layers, theInputDataFormats
option, or targets expected by thetrainnet
function.- Data formats, specified as a string array, character vector, or cell array of character vectors — The
predict
function uses the specified data formats.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT"
(channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
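For example, a minimal sketch that requests a specific output layout (the "CB" format is an assumption about what the downstream code expects):

% Sketch: return the prediction as channel-by-batch data,
% regardless of how the input data is laid out.
scores = predict(net,X,OutputDataFormats="CB");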
Acceleration — Performance optimization
"auto" (default) | "mex" | "none"
Performance optimization, specified as one of these values:
"auto"
— Automatically apply a number of optimizations suitable for the input network and hardware resources."mex"
— Compile and execute a MEX function. This option is available only when using a GPU. You must store the input data or the network learnable parameters asgpuArray
objects. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error."none"
— Disable all acceleration.
When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using different input data.
When Acceleration is "mex", the software generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model.
When Acceleration is "auto", the software does not generate a MEX function.
The "mex"
option is available only when you use a GPU. You must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in MATLAB®. For setup instructions, see MEX Setup (GPU Coder). GPU Coder is not required.
The "mex"
option has these limitations:
- The state output argument is not supported.
- Only
single
precision is supported. The input data or the network learnable parameters must have underlying typesingle
. - Networks with inputs that are not connected to an input layer are not supported.
- Traced dlarray objects are not supported. This means that the
"mex"
option is not supported inside a call todlfeval. - Not all layers are supported. For a list of supported layers, see Supported Layers (GPU Coder).
- MATLAB Compiler™ does not support deploying your network when using the
"mex"
option.
For quantized networks, the "mex" option requires a CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.
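A hedged sketch of enabling MEX acceleration (this assumes a supported GPU, Parallel Computing Toolbox, a C/C++ compiler, and the GPU Coder Interface for Deep Learning support package; im and net are the variables from the earlier example):

% Sketch: store the input on the GPU and let predict compile a MEX function.
% Later calls with the same network reuse the compiled function and are typically faster.
X = dlarray(gpuArray(single(im)),"SSCB");
scores = predict(net,X,Acceleration="mex");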
Output Arguments
Y — Output data
numeric array | dlarray object
Output data, returned as one of these values:
- Numeric array (since R2023b)
- Unformatted dlarray object (since R2023b)
- Formatted dlarray object
The data type matches the data type of the input data.
state — Updated network state
table
Updated network state, returned as a table.
The network state is a table with three columns:
- Layer – Layer name, specified as a string scalar.
- Parameter – State parameter name, specified as a string scalar.
- Value – Value of state parameter, specified as a dlarray object.
Layer states contain information calculated during the layer operation to be retained for use in subsequent forward passes of the layer. For example, the cell state and hidden state of LSTM layers, or running statistics in batch normalization layers.
For recurrent layers, such as LSTM layers, with the HasStateInputs property set to 1 (true), the state table does not contain entries for the states of that layer.
Update the state of a dlnetwork using the State property.
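For example, a minimal sketch of carrying the returned state forward (assuming net contains stateful layers, such as LSTM or batch normalization layers):

% Sketch: run inference, then write the returned state back into the network
% so that the next call to predict starts from the updated state.
[Y,state] = predict(net,X);
net.State = state;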
Algorithms
Reproducibility
To provide the best performance, deep learning using a GPU in MATLAB is not guaranteed to be deterministic. Depending on your network architecture, under some conditions you might get different results when using a GPU to train two identical networks or make two predictions using the same network and data. If you require determinism when performing deep learning operations using a GPU, use the deep.gpu.deterministicAlgorithms function (since R2024b).
If you use the rng function to set the same random number generator and seed, then predictions made using the CPU are reproducible.
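As a hedged sketch of this kind of setup (the call to deep.gpu.deterministicAlgorithms assumes R2024b or later and may slow down GPU execution):

% Sketch: seed the random number generator and, when using a GPU,
% request deterministic algorithms before making predictions.
rng(0)
deep.gpu.deterministicAlgorithms(true);
Y = predict(net,X);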
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
- C++ code generation supports the following syntaxes:
Y = predict(net,X)
Y = predict(net,X1,...,XM)
[Y1,...,YN] = predict(__)
[Y1,...,YK] = predict(__,'Outputs',layerNames)
- You can generate generic C/C++ code that does not depend on any third-party libraries for the syntax [__,state] = predict(__).
- Code generation supports tuning the variable Value of the State property. Code generation does not support modifying the variables Layer and Parameter of the State property.
- Code generation supports these functions for the State property:
- For Simulink simulation, code generation does not support extracting and updating the State of a dlnetwork in a MATLAB Function Block. Instead, use a Stateful Predict or a Stateful Classify block.
- The input data X can only have a variable size on the time ("T") dimension. Other data dimensions for the input data X must not have variable size. The size must be fixed at code generation time.
- Code generation does not support passing complex-valued input to the predict method of a dlnetwork object.
- The dlarray input to the predict method must be a single data type.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Usage notes and limitations:
- GPU code generation supports the following syntaxes:
Y = predict(net,X)
Y = predict(net,X1,...,XM)
[Y1,...,YN] = predict(__)
[Y1,...,YK] = predict(__,'Outputs',layerNames)
- You can generate plain CUDA code that is independent of deep learning libraries for the syntax [__,state] = predict(__).
- Code generation supports tuning the variable Value of the State property. Code generation does not support modifying the variables Layer and Parameter of the State property.
- Code generation supports these functions for the State property:
- For Simulink simulation, code generation does not support extracting and updating the State of a dlnetwork in a MATLAB Function Block. Instead, use a Stateful Predict or a Stateful Classify block.
- The input data X can only have a variable size on the time ("T") dimension. Other data dimensions for the input data X must not have variable size. The size must be fixed at code generation time.
- Code generation for the TensorRT library does not support marking an input layer as an output by using the [Y1,...,YK] = predict(__,'Outputs',layerNames) syntax.
- Code generation does not support passing complex-valued input to the predict method of a dlnetwork object.
- The dlarray input to the predict method must be a single data type.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
The predict function supports GPU array input with these usage notes and limitations:
- This function runs on the GPU if either or both of the following conditions are met:
  - Any of the values of the network learnable parameters inside net.Learnables.Value are dlarray objects with underlying data of type gpuArray
  - The input argument X is a dlarray with underlying data of type gpuArray
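For example, a minimal sketch of running predict on the GPU by moving the input data there (this assumes a supported GPU and Parallel Computing Toolbox; im and net are the variables from the earlier example):

% Sketch: place the input data on the GPU so that predict runs there.
X = dlarray(gpuArray(single(im)),"SSCB");
Y = predict(net,X);
Y = gather(extractdata(Y));   % bring the result back to the CPU as a numeric array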
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2019b
R2023b: Specify numeric arrays and unformatted dlarray objects
Make predictions using numeric arrays and unformatted dlarray objects.
Specify the input and output data formats using the InputDataFormats and OutputDataFormats options, respectively.
R2021a: predict returns state values as dlarray objects
For dlnetwork objects, the state output argument returned by the predict function is a table containing the state parameter names and values for each layer in the network.
Starting in R2021a, the state values are dlarray objects. This change enables better support when using AcceleratedFunction objects. To accelerate deep learning functions that have frequently changing input values, for example, an input containing the network state, the frequently changing values must be specified as dlarray objects.
In previous versions, the state values are numeric arrays.
In most cases, you will not need to update your code. If you have code that requires the state values to be numeric arrays, then to reproduce the previous behavior, extract the data from the state values manually using the extractdata function with the dlupdate function.
state = dlupdate(@extractdata,net.State);