minibatchpredict - Mini-batched neural network prediction - MATLAB


Mini-batched neural network prediction

Since R2024a

Syntax

Description

[Y1,...,YM] = minibatchpredict(net,images) makes neural network predictions by looping over mini-batches of the specified images, where M is the number of network outputs.


[Y1,...,YM] = minibatchpredict(net,sequences) makes neural network predictions by looping over mini-batches of the specified sequences.

[Y1,...,YM] = minibatchpredict(net,features) makes neural network predictions by looping over mini-batches of the specified feature or tabular data.

[Y1,...,YM] = minibatchpredict(net,data) makes neural network predictions by looping over mini-batches of other layouts or combinations of data.

[Y1,...,YM] = minibatchpredict(net,X1,...,XN) makes neural network predictions for a network with multiple inputs using the specified in-memory data.

[Y1,...,YM] = minibatchpredict(___,Name=Value) specifies additional options using one or more name-value arguments. For example, MiniBatchSize=32 makes predictions by looping over mini-batches of size 32.

Examples


Make Predictions Using dlnetwork Object

This example shows how to make predictions using a dlnetwork object by looping over mini-batches.

For large data sets, or when predicting on hardware with limited memory, make predictions by looping over mini-batches of the data using the minibatchpredict function.

Load dlnetwork Object

Load a trained dlnetwork object and the corresponding class names into the workspace. The neural network has one input and two outputs. It takes images of handwritten digits as input, and predicts the digit label and angle of rotation.

Load Data for Prediction

Load the digits test data for prediction.

View the class names.

classNames = 10x1 cell
    {'0'}
    {'1'}
    {'2'}
    {'3'}
    {'4'}
    {'5'}
    {'6'}
    {'7'}
    {'8'}
    {'9'}

View some of the images and the corresponding labels and angles of rotation.

numObservations = size(XTest,4);
numPlots = 9;
idx = randperm(numObservations,numPlots);

figure
for i = 1:numPlots
    nexttile(i)
    I = XTest(:,:,:,idx(i));
    label = labelsTest(idx(i));
    imshow(I)
    title("Label: " + string(label) + newline + "Angle: " + anglesTest(idx(i)))
end

Figure: 3-by-3 grid of test images, each titled with its true label and angle of rotation.

Make Predictions

Make predictions using the minibatchpredict function, and convert the classification scores to labels using the scores2label function. By default, the minibatchpredict function uses a GPU if one is available. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). Otherwise, the function uses the CPU. To select the execution environment manually, use the ExecutionEnvironment argument of the minibatchpredict function.

[scoresTest,Y2Test] = minibatchpredict(net,XTest);
Y1Test = scores2label(scoresTest,classNames);

Visualize some of the predictions.

idx = randperm(numObservations,numPlots);

figure
for i = 1:numPlots
    nexttile(i)
    I = XTest(:,:,:,idx(i));
    label = Y1Test(idx(i));
    imshow(I)
    title("Label: " + string(label) + newline + "Angle: " + Y2Test(idx(i)))
end

Figure: 3-by-3 grid of test images, each titled with its predicted label and angle of rotation.

Input Arguments


net — Neural network

dlnetwork object

Neural network, specified as a dlnetwork object.

images — Image data

numeric array | dlarray object | datastore | minibatchqueue object

Image data, specified as a numeric array, dlarray object, datastore, or minibatchqueue object.

Tip

For sequences of images, such as video data, use the sequences input argument instead.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array is usually the easiest option. If you want to make predictions with image files stored on your system, or want to apply additional processing, then datastores are usually the easiest option.

Tip

Neural networks expect input data with a specific layout. For example, image classification networks typically expect image representations to be _h_-by-_w_-by-_c_ numeric arrays, where _h_, _w_, and _c_ are the height, width, and number of channels of the images, respectively. Most neural networks have an input layer that specifies the expected layout of the data.

Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Array or dlarray Object

For data that fits in memory and does not require additional processing, you can specify a data set of images as a numeric array or a dlarray object.

The layouts of numeric arrays and unformatted dlarray objects depend on the type of image data, and must be consistent with the InputDataFormats argument.

Most networks expect image data in these layouts:

Data Layout
2-D images: _h_-by-_w_-by-_c_-by-_N_ array, where _h_, _w_, and _c_ are the height, width, and number of channels of the images, respectively, and _N_ is the number of images. Data in this layout has the data format "SSCB" (spatial, spatial, channel, batch).
3-D images: _h_-by-_w_-by-_d_-by-_c_-by-_N_ array, where _h_, _w_, _d_, and _c_ are the height, width, depth, and number of channels of the images, respectively, and _N_ is the number of images. Data in this layout has the data format "SSSCB" (spatial, spatial, spatial, channel, batch).

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatteddlarray object. For more information, see Deep Learning Data Formats.
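For example, a minimal sketch of in-memory image input (assuming a trained dlnetwork net whose image input layer expects 28-by-28 grayscale images; all sizes here are hypothetical):

```matlab
% Batch of 100 grayscale 28-by-28 images in the "SSCB" layout
% (spatial, spatial, channel, batch).
X = rand(28,28,1,100,"single");

% Loop over the array in mini-batches; scores contains one set of
% predictions per observation.
scores = minibatchpredict(net,X,MiniBatchSize=25);
```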

Datastore

Datastores read batches of images and targets. Use datastores when you have data that does not fit in memory, or when you want to apply transformations to the data.

For image data, the minibatchpredict function supports these datastores:

Datastore Description Example Usage
ImageDatastore Datastore of images saved on disk. Make predictions with images saved on your system, where the images are the same size. When the images are different sizes, use an augmentedImageDatastore object.
augmentedImageDatastore Datastore that applies random affine geometric transformations, including resizing. Make predictions with images saved on disk, where the images are different sizes. When you make predictions using an augmented image datastore, do not apply additional augmentations such as rotation, reflection, shear, and translation.
TransformedDatastore Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Transform datastores with outputs not supported by the minibatchpredict function. Apply custom transformations to datastore output.
CombinedDatastore Datastore that reads from two or more underlying datastores. Make predictions using a network with multiple inputs.
Custom mini-batch datastore Custom datastore that returns mini-batches of data. Make predictions using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

Tip

ImageDatastore objects allow batch reading of JPG or PNG image files using prefetching. For efficient preprocessing of images for deep learning, including image resizing, use an augmentedImageDatastore object. Do not use the ReadFcn property of ImageDatastore objects. If you set the ReadFcn property to a custom function, then the ImageDatastore object does not prefetch image files and is usually significantly slower.

You can use other built-in datastores for testing deep learning neural networks by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the minibatchpredict function. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.
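As an illustrative sketch (assuming image files in a hypothetical folder digitsData and a network net that expects 28-by-28 inputs), you can resize images while reading them from disk:

```matlab
% Read images from disk and resize them to the network input size.
imds = imageDatastore("digitsData",IncludeSubfolders=true);
augimds = augmentedImageDatastore([28 28],imds);

% Loop over mini-batches of the datastore output.
scores = minibatchpredict(net,augimds);
```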

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

Note

This argument supports complex-valued predictors.

sequences — Sequence or time series data

cell array of numeric arrays | cell array of dlarray objects | numeric array | dlarray object | datastore | minibatchqueue object

Sequence or time series data, specified as a numeric array, cell array of numeric arrays, dlarray object, cell array of dlarray objects, datastore, or minibatchqueue object.

If you have sequences of the same length that fit in memory and do not require additional processing, then specifying the input data as a numeric array is usually the easiest option. If you have sequences of different lengths that fit in memory and do not require additional processing, then specifying the input data as a cell array of numeric arrays is usually the easiest option. If you want to make predictions with sequences stored on your system, or want to apply additional processing such as custom transformations, then datastores are usually the easiest option.

Tip

Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be _t_-by-_c_ arrays, where _t_ and _c_ are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Array, dlarray Object, or Cell Array

For data that fits in memory and does not require additional processing like custom transformations, you can specify a single sequence as a numeric array or dlarray object, or a data set of sequences as a cell array of numeric arrays or dlarray objects.

For cell array input, the cell array must be an _N_-by-1 cell array of numeric arrays or dlarray objects, where _N_ is the number of observations. The sizes and shapes of the numeric arrays or dlarray objects that represent the sequences depend on the type of sequence data, and must be consistent with the InputDataFormats argument value.

This table describes the expected layout of data for a neural network with a sequence input layer.

Data Layout
Vector sequences: _s_-by-_c_ matrices, where _s_ and _c_ are the numbers of time steps and channels (features) of the sequences, respectively.
1-D image sequences: _h_-by-_c_-by-_s_ arrays, where _h_ and _c_ correspond to the height and number of channels of the images, respectively, and _s_ is the sequence length.
2-D image sequences: _h_-by-_w_-by-_c_-by-_s_ arrays, where _h_, _w_, and _c_ correspond to the height, width, and number of channels of the images, respectively, and _s_ is the sequence length.
3-D image sequences: _h_-by-_w_-by-_d_-by-_c_-by-_s_ arrays, where _h_, _w_, _d_, and _c_ correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and _s_ is the sequence length.

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatteddlarray object. For more information, see Deep Learning Data Formats.
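For example, a minimal sketch of cell array input for sequences of different lengths (assuming a trained network net with a vector-sequence input of 12 channels; the sizes are hypothetical):

```matlab
% Three vector sequences with 20, 35, and 28 time steps, each stored
% as a t-by-c matrix with c = 12 channels.
sequences = {rand(20,12); rand(35,12); rand(28,12)};

Y = minibatchpredict(net,sequences);
```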

Datastore

Datastores read batches of sequences and targets. Use datastores when you have data that does not fit in memory, or when you want to apply transformations to the data.

For sequence and time-series data, the minibatchpredict function supports these datastores:

Datastore Description Example Usage
TransformedDatastore Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Transform datastores with outputs not supported by the minibatchpredict function. Apply custom transformations to datastore output.
CombinedDatastore Datastore that reads from two or more underlying datastores. Make predictions using a network with multiple inputs.
Custom mini-batch datastore Custom datastore that returns mini-batches of data. Make predictions using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the minibatchpredict function. For example, you can transform and combine data read from in-memory arrays and CSV files using ArrayDatastore and TabularTextDatastore objects, respectively. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

Note

This argument supports complex-valued predictors.

features — Feature or tabular data

numeric array | dlarray object | table | datastore | minibatchqueue object

Feature or tabular data, specified as a numeric array, dlarray object, table, datastore, or minibatchqueue object.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array or table is usually the easiest option. If you want to make predictions with feature or tabular data stored on your system, or want to apply additional processing such as custom transformations, then datastores are usually the easiest option.

Tip

Neural networks expect input data with a specific layout. For example, feature classification networks typically expect feature and tabular data representations to be 1-by-_c_ vectors, where _c_ is the number of features of the data. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Array or dlarray Object

For feature data that fits in memory and does not require additional processing like custom transformations, you can specify feature data as a numeric array ordlarray object.

The layouts of numeric arrays and unformatted dlarray objects must be consistent with the InputDataFormats argument. Most networks with feature input expect input data specified as a numObservations-by-numFeatures array, where numObservations is the number of observations and numFeatures is the number of features of the input data.
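A minimal sketch of feature input (assuming a trained network net with a feature input layer of 4 channels; the sizes are hypothetical):

```matlab
% 150 observations, each with 4 features, stored as a
% numObservations-by-numFeatures array.
X = rand(150,4,"single");

scores = minibatchpredict(net,X);
```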

Table

For feature data that fits in memory and does not require additional processing like custom transformations, you can specify feature data as a table.

To specify feature data as a table, specify a table with numObservations rows and numFeatures columns, where numObservations and numFeatures are the number of observations and channels of the input data, respectively. The minibatchpredict function uses the table variables as the input features.

Datastore

Datastores read batches of feature data and targets. Use datastores when you have data that does not fit in memory, or when you want to apply transformations to the data.

For feature and tabular data, the minibatchpredict function supports these datastores:

Datastore Description Example Usage
TransformedDatastore Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Make predictions using a neural network with multiple inputs. Transform datastores with outputs not supported by the minibatchpredict function. Apply custom transformations to datastore output.
CombinedDatastore Datastore that reads from two or more underlying datastores. Make predictions using a neural network with multiple inputs.
Custom mini-batch datastore Custom datastore that returns mini-batches of data. Make predictions using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

You can use other built-in datastores for making predictions by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by the minibatchpredict function. For more information, see Datastore Customization.

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

Note

This argument supports complex-valued predictors.

data — Generic data or combinations of data types

numeric array | dlarray object | datastore | minibatchqueue object

Generic data or combinations of data types, specified as a numeric array,dlarray object, datastore, or minibatchqueue object.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array is usually the easiest option. If you want to make predictions with data stored on your system, or you want to apply additional processing, then datastores are usually the easiest option.

Tip

Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be _t_-by-_c_ arrays, where _t_ and _c_ are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Array or dlarray Object

For data that fits in memory and does not require additional processing like custom transformations, you can specify data as a numeric array or dlarray object.

For a neural network with an inputLayer object, the expected layout of the input data is given by the InputFormat property of the layer.

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatteddlarray object. For more information, see Deep Learning Data Formats.
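For example, a minimal sketch using a formatted dlarray (assuming a network net without an input layer that processes channel-batch-time data; the sizes are hypothetical):

```matlab
% 3 channels, 10 observations, 50 time steps, labeled "CBT"
% (channel, batch, time) so the layout is explicit.
X = dlarray(rand(3,10,50),"CBT");

Y = minibatchpredict(net,X);
```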

Datastores

Datastores read batches of data and targets. Use datastores when you have data that does not fit in memory, or when you want to apply transformations to the data.

For generic data or combinations of data types, the minibatchpredict function supports these datastores:

Datastore Description Example Usage
TransformedDatastore Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Make predictions using a neural network with multiple inputs. Transform outputs of datastores not supported by minibatchpredict to have the required format. Apply custom transformations to datastore output.
CombinedDatastore Datastore that reads from two or more underlying datastores. Make predictions using a neural network with multiple inputs.
Custom mini-batch datastore Custom datastore that returns mini-batches of data. Make predictions using data in a format that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by minibatchpredict. For more information, see Datastore Customization.

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

If you specify data as a minibatchqueue object, then the minibatchpredict function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

Note

This argument supports complex-valued predictors.

X1,...,XN — In-memory data for neural network with multiple inputs

numeric array | dlarray object | cell array

In-memory data for a neural network with multiple inputs, specified as numeric arrays, dlarray objects, or cell arrays.

For a neural network with multiple inputs, if you have data that fits in memory and does not require additional processing, then specifying the input data as in-memory arrays is usually the easiest option. If you want to make predictions with data stored on your system, or you want to apply additional processing, then using datastores is usually the easiest option.

Tip

Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be _t_-by-_c_ arrays, where _t_ and _c_ are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

For more information, see Deep Learning Data Formats.

For each input X1,...,XN, where N is the number of inputs, specify the data as a numeric array, dlarray object, or cell array as described by the images, sequences, features, or data argument that matches the type of data. The input Xi corresponds to the network input net.InputNames(i).
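A minimal sketch for a two-input network (assuming net has an image input followed by a feature input in net.InputNames; all sizes are hypothetical):

```matlab
% Xi corresponds to net.InputNames(i).
X1 = rand(28,28,1,100,"single");   % images in "SSCB" layout
X2 = rand(100,4,"single");         % one row of features per image

Y = minibatchpredict(net,X1,X2);
```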

Note

This argument supports complex-valued predictors.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: minibatchpredict(net,images,MiniBatchSize=32) makes predictions by looping over images using mini-batches of size 32.

MiniBatchSize — Size of mini-batches

128 (default) | positive integer

Size of mini-batches to use for prediction, specified as a positive integer. Larger mini-batch sizes require more memory, but can lead to faster predictions.

When you make predictions with sequences of different lengths, the mini-batch size can impact the amount of padding added to the input data, which can result in different predicted values. Try using different values to see which works best with your network. To specify padding options, use the SequenceLength, SequencePaddingDirection, and SequencePaddingValue arguments.

Note

If you specify the input data as a minibatchqueue object, then the minibatchpredict function uses the mini-batch size specified by this argument and not the MiniBatchSize property of the minibatchqueue object.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Outputs — Layers to extract outputs from

string array | cell array of character vectors

Layers to extract outputs from, specified as a string array or a cell array of character vectors containing the layer names.

The default value is net.OutputNames.

Acceleration — Performance optimization

"auto" (default) | "mex" | "none"

Performance optimization, specified as one of these values:

"auto" — Automatically apply a number of optimizations suitable for the network and hardware resources.

"mex" — Compile and execute a MEX function. This option is available only when you use a GPU.

"none" — Disable all acceleration.

When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using different input data.

When Acceleration is "mex", the software generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model.

When Acceleration is "auto", the software does not generate a MEX function.

The "mex" option is available only when you use a GPU. You must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in MATLAB®. For setup instructions, see MEX Setup (GPU Coder). GPU Coder is not required.

The "mex" option has these limitations:

For quantized networks, the "mex" option requires a CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.

ExecutionEnvironment — Hardware resource

"auto" (default) | "gpu" | "cpu"

Hardware resource, specified as one of these values:

"auto" — Use a GPU if one is available. Otherwise, use the CPU.

"gpu" — Use the GPU. Using a GPU requires a Parallel Computing Toolbox license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).

"cpu" — Use the CPU.

SequenceLength — Option to pad or truncate input sequences

"longest" (default) | "shortest"

Option to pad or truncate the input sequences, specified as one of these options:

"longest" — Pad sequences in each mini-batch to have the same length as the longest sequence in that mini-batch.

"shortest" — Truncate sequences in each mini-batch to have the same length as the shortest sequence in that mini-batch.

To learn more about the effects of padding and truncating the input sequences, seeSequence Padding and Truncation.

SequencePaddingDirection — Direction of padding or truncation

"right" (default) | "left"

Direction of padding or truncation, specified as one of these options:

"right" — Pad or truncate sequences on the right. The sequences start at the same time step, and the software pads or truncates the ends of the sequences.

"left" — Pad or truncate sequences on the left. The sequences end at the same time step, and the software pads or truncates the starts of the sequences.

Because recurrent layers process sequence data one time step at a time, when the recurrent layer OutputMode property is "last", any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection argument to "left".

For sequence-to-sequence neural networks (when the OutputMode property is"sequence" for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection option to "right".

To learn more about the effects of padding and truncating sequences, see Sequence Padding and Truncation.
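For example, a minimal sketch of left padding for a sequence-to-one network (assuming net ends in a recurrent layer with OutputMode set to "last"; the data is hypothetical):

```matlab
% Left-pad with zeros so that padding does not occupy the final
% time steps that the "last" output depends on.
sequences = {rand(20,12); rand(35,12)};
Y = minibatchpredict(net,sequences, ...
    SequencePaddingDirection="left", ...
    SequencePaddingValue=0);
```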

SequencePaddingValue — Value by which to pad input sequences

0 (default) | scalar

Value by which to pad the input sequences, specified as a scalar.

Do not pad sequences with NaN, because doing so can propagate errors through the neural network.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

InputDataFormats — Description of input data dimensions

"auto" (default) | string array | cell array of character vectors | character vector

Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.

If the InputDataFormats argument value is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.

A data format is a string of characters, where each character describes the type of the corresponding data dimension.

The characters are:

"S" — Spatial

"C" — Channel

"B" — Batch

"T" — Time

"U" — Unspecified

For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

For a neural network with multiple inputs, net, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).

For more information, see Deep Learning Data Formats.
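For example, a minimal sketch of describing a nonstandard layout (assuming a network net and data stored channel-by-batch-by-time; the sizes are hypothetical):

```matlab
% The array is 3 channels by 10 observations by 50 time steps, so
% describe it as "CBT" rather than permuting the data.
X = rand(3,10,50);
Y = minibatchpredict(net,X,InputDataFormats="CBT");
```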

Data Types: char | string | cell

OutputDataFormats — Description of output data dimensions

"auto" (default) | string array | cell array of character vectors | character vector

Description of the output data dimensions, specified as one of these values:

"auto" — Automatically determine the output formats.

String array, character vector, or cell array of character vectors — Use the specified output formats.

A data format is a string of characters, where each character describes the type of the corresponding data dimension.

The characters are:

"S" — Spatial

"C" — Channel

"B" — Batch

"T" — Time

"U" — Unspecified

For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

For more information, see Deep Learning Data Formats.

Data Types: char | string | cell

UniformOutput — Flag to return padded data as uniform array

true or 1 (default) | false or 0

Flag to return padded data as a uniform array, specified as a logical1 (true) or 0 (false). When you set the value to 0, the software outputs a cell array of predictions.
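A minimal sketch (assuming a sequence-to-sequence network net and hypothetical data), returning per-observation predictions without padding them into one uniform array:

```matlab
% With UniformOutput=false, Y is a cell array with one prediction
% per observation, so sequences keep their own lengths.
sequences = {rand(20,12); rand(35,12)};
Y = minibatchpredict(net,sequences,UniformOutput=false);
```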

Output Arguments


Y1,...,YM — Neural network predictions

numeric array | dlarray object | cell array

Neural network predictions, returned as numeric arrays, dlarray objects, or cell arrays Y1,...,YM, where M is the number of network outputs.

The predictions Yi correspond to the output Outputs(i).

More About


Floating-Point Arithmetic

The minibatchpredict function converts integer numeric array and datastore inputs to single-precision, floating-point values. For minibatchqueue inputs, the software uses the data type specified by the OutputCast property of that input.

When you use prediction or validation functions with a dlnetwork object with single-precision learnable and state parameters, the software performs the computations using single-precision, floating-point arithmetic.

When you use prediction or validation functions with a dlnetwork object with double-precision learnable and state parameters, the software performs the computations using double-precision, floating-point arithmetic.

Reproducibility

To provide the best performance, deep learning using a GPU in MATLAB is not guaranteed to be deterministic. Depending on your network architecture, under some conditions you might get different results when using a GPU to train two identical networks or make two predictions using the same network and data. If you require determinism when performing deep learning operations using a GPU, use the deep.gpu.deterministicAlgorithms function (since R2024b).

If you use the rng function to set the same random number generator and seed, then predictions made using the CPU are reproducible.
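For example, a minimal sketch of a reproducible CPU prediction (assuming a network net and in-memory data X; both hypothetical):

```matlab
% Fix the generator state before each call; CPU predictions are
% then reproducible.
rng(0,"twister")
Y1 = minibatchpredict(net,X,ExecutionEnvironment="cpu");
rng(0,"twister")
Y2 = minibatchpredict(net,X,ExecutionEnvironment="cpu");
```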

Extended Capabilities

GPU Arrays

Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.

This function fully supports GPU acceleration.

By default, the minibatchpredict function uses a GPU if one is available. You can specify the hardware that the minibatchpredict function uses by specifying the ExecutionEnvironment name-value argument.

For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).

Version History

Introduced in R2024a