testnet - Test deep learning neural network - MATLAB

Test deep learning neural network

Since R2024b

Syntax

Description

metricValues = testnet(net,images,metrics) tests the neural network net by evaluating it with the image data and targets specified by images and the metrics specified by metrics.

metricValues = testnet(net,images,targets,metrics) tests the neural network net with the image data specified by images, the targets specified by targets, and the metrics specified by metrics.

metricValues = testnet(net,sequences,metrics) tests the neural network net with the sequence data and targets specified by sequences, and the metrics specified by metrics.

metricValues = testnet(net,sequences,targets,metrics) tests the neural network net with the sequence data specified by sequences, the targets specified by targets, and the metrics specified by metrics.

metricValues = testnet(net,features,metrics) tests the neural network net with the feature data and targets specified by features, and the metrics specified by metrics.

metricValues = testnet(net,features,targets,metrics) tests the neural network net with the feature data specified by features, the targets specified by targets, and the metrics specified by metrics.

metricValues = testnet(net,data,metrics) tests the neural network with other data layouts or combinations of different types of data.

metricValues = testnet(net,data,targets,metrics) tests the neural network with the predictors specified by data and the targets specified by targets.

metricValues = testnet(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in previous syntaxes. For example, InputDataFormats="CB" specifies that the first and second dimensions of the input data correspond to the channel and batch dimensions, respectively.

Examples


Load a trained dlnetwork object. The MAT file digitsNet.mat contains an image classification neural network specified by the variable net.
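For example, a call along these lines (assuming the MAT file is on the MATLAB path):

load("digitsNet.mat")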

Load the test data. The MAT file DigitsDataTest.mat contains the test images and labels specified by the variables XTest and labelsTest, respectively.
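Similarly (assuming the MAT file is on the path):

load("DigitsDataTest.mat")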

Test the neural network using the testnet function. For single-label classification, evaluate the accuracy. The accuracy is the percentage of correct predictions.

accuracy = testnet(net,XTest,labelsTest,"accuracy")

Input Arguments


Neural network, specified as a dlnetwork object.

Image data, specified as a numeric array, dlarray object, datastore, or minibatchqueue object. For sequences of images, such as video data, use the sequences input argument instead.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array is usually the best option. To test a neural network with image files stored on your system, or to apply additional processing, use a datastore. For neural networks with multiple inputs or multiple outputs, you must use a TransformedDatastore, CombinedDatastore, or minibatchqueue object.

Tip

Neural networks expect input data with a specific layout. For example, image classification networks typically expect image representations to be _h_-by-_w_-by-_c_ numeric arrays, where h, w, and c are the height, width, and number of channels of the images, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Array or dlarray Object

For data that fits in memory and does not require additional processing, you can specify a data set of images as a numeric array or a dlarray object. When you do, you must also specify the targets argument.

The layout of numeric arrays and unformatted dlarray objects depends on the type of image data, and must be consistent with the InputDataFormats name-value argument.

Most networks expect image data in these layouts.

2-D images: _h_-by-_w_-by-_c_-by-_N_ array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSCB" (spatial, spatial, channel, batch).

3-D images: _h_-by-_w_-by-_d_-by-_c_-by-_N_ array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSSCB" (spatial, spatial, spatial, channel, batch).

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.
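For instance, a minimal sketch, assuming the test images are stored channel-first as a c-by-h-by-w-by-N array XTest and the labels TTest are an N-by-1 categorical vector (the variable names are hypothetical):

accuracy = testnet(net,XTest,TTest,"accuracy",InputDataFormats="CSSB")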

Categorical Array (since R2025a)

For images of categorical values (such as labeled pixel maps) that fit in memory and do not require additional processing, you can specify the images as categorical arrays.

If you specify images as a categorical array, then you must also specify the targets argument.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument. The layout of categorical arrays depends on the type of image data and must be consistent with the InputDataFormats argument.

Most networks expect categorical image data passed to the testnet function in these layouts.

2-D categorical images: _h_-by-_w_-by-1-by-_N_ array, where h and w are the height and width of the images, respectively, and N is the number of images. After the software converts this data to numeric arrays, data in this layout has the data format "SSCB" (spatial, spatial, channel, batch). The size of the "C" (channel) dimension depends on the CategoricalInputEncoding argument.

3-D categorical images: _h_-by-_w_-by-_d_-by-1-by-_N_ array, where h, w, and d are the height, width, and depth of the images, respectively, and N is the number of images. Data in this layout has the data format "SSSCB" (spatial, spatial, spatial, channel, batch). The size of the "C" (channel) dimension depends on the CategoricalInputEncoding argument.

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.

Datastore

A datastore reads batches of images and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.

For image data, the testnet function supports these datastores.

ImageDatastore: Datastore of images saved on disk. Example usage: test with images saved on your system, where the images are the same size. When the images are different sizes, use an augmentedImageDatastore object.

augmentedImageDatastore: Datastore that applies random affine geometric transformations, including resizing. Example usage: test with images saved on disk, where the images are different sizes. When you test using an augmented image datastore, do not apply additional augmentations such as rotation, reflection, shear, and translation.

TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Example usage: transform datastores with outputs not supported by the testnet function, or apply custom transformations to datastore output.

CombinedDatastore: Datastore that reads from two or more underlying datastores. Example usage: test using a network with multiple inputs.

Custom mini-batch datastore: Custom datastore that returns mini-batches of data. Example usage: test using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

Tip

ImageDatastore objects allow batch reading of JPG or PNG image files using prefetching. For efficient preprocessing of images for deep learning, including image resizing, use an augmentedImageDatastore object. Do not use the ReadFcn property of ImageDatastore objects. If you set the ReadFcn property to a custom function, then the ImageDatastore object does not prefetch image files and is usually significantly slower.

You can use other built-in datastores for testing deep learning neural networks by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the testnet function. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.
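As a minimal sketch (the folder name is an assumption), you can pair an image datastore of predictors with an array datastore of targets:

dsX = imageDatastore("digitsTest",IncludeSubfolders=true,LabelSource="foldernames");
dsT = arrayDatastore(dsX.Labels);
dsTest = combine(dsX,dsT);
accuracy = testnet(net,dsTest,"accuracy");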

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
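A minimal sketch, assuming in-memory test images XTest (h-by-w-by-c-by-N) and categorical labels TTest (N-by-1); the datastore and preprocessing choices here are illustrative, not the only valid setup:

dsX = arrayDatastore(XTest,IterationDimension=4);
dsT = arrayDatastore(TTest);
mbq = minibatchqueue(combine(dsX,dsT),2, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat=["SSCB" "CB"]);

accuracy = testnet(net,mbq,"accuracy",MiniBatchSize=128);

function [X,T] = preprocessMiniBatch(dataX,dataT)
    % Concatenate image observations along the fourth (batch) dimension and
    % one-hot encode the categorical labels along the first (channel) dimension.
    X = cat(4,dataX{:});
    T = onehotencode(cat(2,dataT{:}),1);
end

Because testnet ignores the MiniBatchSize property of the minibatchqueue object, this sketch passes the mini-batch size as a name-value argument in the testnet call instead.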

Note

This argument supports complex-valued predictors and targets.

Sequence or time series data, specified as a numeric array, categorical array, cell array, datastore, or minibatchqueue object.

If you have sequences of the same length that fit in memory and do not require additional processing, then specifying the input data as a numeric array is usually the best option. If you have sequences of different lengths that fit in memory and do not require additional processing, then specifying the input data as a cell array of numeric arrays is usually the best option. To test a neural network with sequences stored on your system, or to apply additional processing such as custom transformations, use a datastore. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.

Tip

Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be _t_-by-_c_ arrays, where t and c are the number of time steps and channels of sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Array, Categorical Array, dlarray Object, or Cell Array

For data that fits in memory and does not require additional processing like custom transformations, you can specify a single sequence as a numeric array, categorical array, or a dlarray object, or a data set of sequences as a cell array of numeric arrays, categorical arrays, or dlarray objects. If you specify sequences as a numeric array, categorical array, cell array, or a dlarray object, then you must also specify the targets argument.

For cell array input, the cell array must be an _N_-by-1 cell array of numeric arrays, categorical arrays, or dlarray objects, where N is the number of observations.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument.

The size and shape of the numeric arrays, categorical arrays, or dlarray objects that represent sequences depend on the type of sequence data and must be consistent with the InputDataFormats argument.

Most networks with a sequence input layer expect sequence data passed to the testnet function in these layouts.

Vector sequences: _s_-by-_c_ matrices, where s and c are the numbers of time steps and channels (features) of the sequences, respectively.

Categorical vector sequences: _s_-by-1 categorical arrays, where s is the number of time steps of the sequences.

1-D image sequences: _h_-by-_c_-by-_s_ arrays, where h and c correspond to the height and number of channels of the images, respectively, and s is the sequence length.

Categorical 1-D image sequences: _h_-by-1-by-_s_ categorical arrays, where h corresponds to the height of the images and s is the sequence length.

2-D image sequences: _h_-by-_w_-by-_c_-by-_s_ arrays, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and s is the sequence length.

Categorical 2-D image sequences: _h_-by-_w_-by-1-by-_s_ arrays, where h and w correspond to the height and width of the images, respectively, and s is the sequence length.

3-D image sequences: _h_-by-_w_-by-_d_-by-_c_-by-_s_ arrays, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and s is the sequence length.

Categorical 3-D image sequences: _h_-by-_w_-by-_d_-by-1-by-_s_ arrays, where h, w, and d correspond to the height, width, and depth of the 3-D images, respectively, and s is the sequence length.

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.
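A minimal sketch, assuming a vector-sequence classification network, a cell array XTest of t-by-c sequences with different lengths, and an N-by-1 categorical label vector TTest (the variable names are hypothetical):

accuracy = testnet(net,XTest,TTest,"accuracy", ...
    SequenceLength="longest",SequencePaddingDirection="left")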

Datastore

A datastore reads batches of sequences and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.

For sequence and time-series data, the testnet function supports these datastores.

TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Example usage: transform datastores with outputs not supported by the testnet function, or apply custom transformations to datastore output.

CombinedDatastore: Datastore that reads from two or more underlying datastores. Example usage: combine predictors and targets from different data sources.

Custom mini-batch datastore: Custom datastore that returns mini-batches of data. Example usage: test a neural network using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the testnet function. For example, you can transform and combine data read from in-memory arrays and CSV files using ArrayDatastore and TabularTextDatastore objects, respectively. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

Note

This argument supports complex-valued predictors and targets.

Feature or tabular data, specified as a numeric array, dlarray object, table, datastore, or minibatchqueue object.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array or table is usually the best option. To test with feature or tabular data stored on your system, or to apply additional processing such as custom transformations, use a datastore. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.

Tip

Neural networks expect input data with a specific layout. For example, feature classification networks typically expect feature and tabular data representations to be 1-by-_c_ vectors, where c is the number of features of the data. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Array or dlarray Object

For feature data that fits in memory and does not require additional processing such as custom transformations, you can specify feature data as a numeric array. When you do, you must also specify the targets argument.

The layout of numeric arrays and unformatted dlarray objects must be consistent with the InputDataFormats name-value argument. Most networks with feature input expect input data specified as an _N_-by-numFeatures array, where N is the number of observations and numFeatures is the number of features of the input data.
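For example, a minimal sketch, assuming an N-by-numFeatures matrix XTest and an N-by-1 categorical label vector TTest (the variable names are hypothetical):

metricValues = testnet(net,XTest,TTest,["accuracy" "fscore"])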

Categorical Array (since R2025a)

For discrete features that fit in memory and do not require additional processing like custom transformations, you can specify the feature data as a categorical array.

If you specify features as a categorical array, then you must also specify the targets argument.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument. The layout of categorical arrays must be consistent with the InputDataFormats argument.

Most networks with categorical feature input expect input data specified as an _N_-by-1 vector, where N is the number of observations. After the software converts this data to numeric arrays, data in this layout has the data format "BC" (batch, channel). The size of the "C" (channel) dimension depends on the CategoricalInputEncoding argument.

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats name-value argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.

Table

For feature data that fits in memory and does not require additional processing such as custom transformations, you can specify feature data as a table. When you do, you must not specify the targets argument.

To specify feature data as a table, specify a table with numObservations rows and numFeatures+1 columns, where numObservations and numFeatures are the number of observations and features of the input data, respectively. The testnet function uses the first numFeatures columns as the input features and uses the last column as the targets.
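A minimal sketch, assuming an N-by-numFeatures matrix XTest and an N-by-1 categorical label vector TTest (the variable names are hypothetical); the last table column holds the targets:

tbl = array2table(XTest);
tbl.Label = TTest;
accuracy = testnet(net,tbl,"accuracy");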

Datastore

A datastore reads batches of feature data and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.

For feature and tabular data, the testnet function supports these datastores.

TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Example usage: test neural networks with multiple inputs, transform datastores with outputs not supported by the testnet function, or apply custom transformations to datastore output.

CombinedDatastore: Datastore that reads from two or more underlying datastores. Example usage: test neural networks with multiple inputs, or combine predictors and targets from different data sources.

Custom mini-batch datastore: Custom datastore that returns mini-batches of data. Example usage: test a neural network using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

You can use other built-in datastores for making predictions by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by the testnet function. For more information, see Datastore Customization.

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

Note

This argument supports complex-valued predictors and targets.

Generic data or combinations of data types, specified as a numeric array, dlarray object, datastore, or minibatchqueue object.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array is usually the best option. To test with data stored on your system, or to apply additional processing, use a datastore. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.

Tip

Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be _t_-by-_c_ arrays, where t and c are the number of time steps and channels of sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.

Numeric Arrays, Categorical Arrays, or dlarray Objects

For data that fits in memory and does not require additional processing like custom transformations, you can specify data as a numeric array, categorical array, or a dlarray object. If you specify data as a numeric array, then you must also specify the targets argument.

For a neural network with an inputLayer object, the expected layout of the input data is given by the InputFormat property of the layer.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument. The layout of categorical arrays must be consistent with the InputDataFormats argument.

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.

Datastores

A datastore reads batches of data and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.

The testnet function supports these datastores.

TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. Example usage: test neural networks with multiple inputs, transform outputs of datastores not supported by testnet to have the required format, or apply custom transformations to datastore output.

CombinedDatastore: Datastore that reads from two or more underlying datastores. Example usage: test neural networks with multiple inputs, or combine predictors and targets from different data sources.

Custom mini-batch datastore: Custom datastore that returns mini-batches of data. Example usage: test a neural network using data in a format that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore.

To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by testnet. For more information, see Datastore Customization.

minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

Note

This argument supports complex-valued predictors and targets.

Test targets, specified as a categorical array, numeric array, or cell array of sequences.

To specify targets for networks with multiple outputs, specify both the predictors and targets in a single argument using the images, sequences, features, or data arguments.

Tip

Metric functions expect data with a specific layout. For example, for sequence-to-vector regression networks, the metric function typically expects target vectors to be represented as a 1-by-_R_ vector, where R is the number of responses.

Most datastores and functions output data in the layout that the metric function expects. If your target data is in a different layout than the metric function expects, then indicate that your targets have a different layout by using the TargetDataFormats argument, by specifying the data as a minibatchqueue object and specifying the TargetDataFormats property, or by specifying the target data as a formatted dlarray object. It is usually easier to specify data formats than to preprocess the target data. If you specify both the TargetDataFormats argument and the TargetDataFormats minibatchqueue property, then they must match.

For more information, see Deep Learning Data Formats.

The expected layout of the targets depends on the metric function. The targets listed here are only a subset. The metric functions may support additional targets with different layouts, such as targets with additional dimensions. For custom metric functions, the software uses the format information of the network output data to determine the type of target data and applies the corresponding layout shown here.

Categorical labels: _N_-by-1 categorical vector of labels, where N is the number of observations.

Sequences of categorical labels: _t_-by-1-by-_N_ categorical array, where t and N are the numbers of time steps and observations, respectively. Alternatively, an _N_-by-1 cell array of sequences, where N is the number of observations; the sequences are _t_-by-1 categorical vectors and can have different lengths.

Class indices: _N_-by-1 numeric vector of class indices, where N is the number of observations.

Sequences of class indices: _t_-by-1-by-_N_ matrix of class indices, where t and N are the numbers of time steps and observations, respectively. Alternatively, an _N_-by-1 cell array of sequences, where N is the number of observations; the sequences are _t_-by-1 numeric vectors of class indices and can have different lengths.

Binary labels (single label): _N_-by-1 vector, where N is the number of observations.

Binary labels (multilabel): _N_-by-_c_ matrix, where N and c are the numbers of observations and classes, respectively.

Numeric scalars: _N_-by-1 vector, where N is the number of observations.

Numeric vectors: _N_-by-_R_ matrix, where N is the number of observations and R is the number of responses.

2-D images: _h_-by-_w_-by-_c_-by-_N_ numeric array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images.

3-D images: _h_-by-_w_-by-_d_-by-_c_-by-_N_ numeric array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images.

Numeric sequences of scalars: _t_-by-1-by-_N_ array, where t and N are the numbers of time steps and sequences, respectively. Alternatively, an _N_-by-1 cell array of sequences, where N is the number of sequences; the sequences are _t_-by-1 vectors, where t is the number of time steps, and can have different lengths.

Numeric sequences of vectors: _t_-by-_c_-by-_N_ array, where t, c, and N are the numbers of time steps, channels, and sequences, respectively. Alternatively, an _N_-by-1 cell array of sequences, where N is the number of sequences; the sequences are _t_-by-_c_ matrices, where t and c are the numbers of time steps and channels of the sequences, respectively, and can have different lengths.

Sequences of 1-D images: _h_-by-_c_-by-_N_-by-_t_ array, where h, c, and t are the height, number of channels, and number of time steps of the sequences, respectively, and N is the number of sequences. Alternatively, an _N_-by-1 cell array of sequences, where N is the number of sequences; the sequences are _h_-by-_c_-by-_t_ arrays, where h, c, and t are the height, number of channels, and number of time steps of the sequences, respectively, and can have different lengths.

Sequences of 2-D images: _h_-by-_w_-by-_c_-by-_N_-by-_t_ array, where h, w, c, and t are the height, width, number of channels, and number of time steps of the sequences, respectively, and N is the number of sequences. Alternatively, an _N_-by-1 cell array of sequences, where N is the number of sequences; the sequences are _h_-by-_w_-by-_c_-by-_t_ arrays, where h, w, c, and t are the height, width, number of channels, and number of time steps of the sequences, respectively, and can have different lengths.

Sequences of 3-D images: _h_-by-_w_-by-_d_-by-_c_-by-_N_-by-_t_ array, where h, w, d, c, and t are the height, width, depth, number of channels, and number of time steps of the sequences, respectively, and N is the number of sequences. Alternatively, an _N_-by-1 cell array of sequences, where N is the number of sequences; the sequences are _h_-by-_w_-by-_d_-by-_c_-by-_t_ arrays, where h, w, d, c, and t are the height, width, depth, number of channels, and number of time steps of the sequences, respectively, and can have different lengths.

For targets in a different layout, indicate that your targets have a different layout by using the TargetDataFormats name-value argument or a formatted dlarray object. For more information, see Deep Learning Data Formats.
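For example, a minimal sketch, assuming a regression network and numeric targets TTest stored channel-first as an R-by-N matrix rather than the default N-by-R layout (the variable names are hypothetical):

rmse = testnet(net,XTest,TTest,"rmse",TargetDataFormats="CB")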

The software automatically converts categorical targets to numeric values and passes them to the metrics function. To specify how the software converts categorical targets to numeric values, use the CategoricalTargetEncoding argument.

Metrics to evaluate, specified as a character vector or string scalar of a built-in metric name, a string array of names, a built-in or custom metric object, a function handle, a deep.DifferentiableFunction object, or a cell array of names, metric objects, and function handles:

For more information about deep learning metrics and loss functions, see Deep Learning Metrics.

If you specify a metric as a function handle or a custom metric object, then the layout of the targets that the software passes to the metric depends on the data type of the targets and other metrics that you specify:

Example: ["accuracy","fscore"]

Example: {"accuracy",@myMetric}
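As a sketch of the function-handle form, the custom metric below (a hypothetical mean-absolute-error metric) receives the predictions Y and targets T for a mini-batch and returns a numeric scalar:

meanAbsErr = testnet(net,XTest,TTest,@meanAbsoluteError);

% Define the metric in a function file or as a local function at the end of a script.
function metric = meanAbsoluteError(Y,T)
    % Y and T are the predictions and targets for one mini-batch.
    metric = mean(abs(Y-T),"all");
end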

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: testnet(net,data,"accuracy",InputDataFormats="CB") tests the accuracy of the neural network and specifies that the first and second dimension of the input data correspond to the channel and batch dimensions, respectively.

Layout of output metric values, specified as "vector" or "table".

Size of mini-batches to use for prediction, specified as a positive integer. Larger mini-batch sizes require more memory, but can lead to faster predictions.

When you make predictions with sequences of different lengths, the mini-batch size can affect the amount of padding added to the input data, which can result in different predicted values. Try using different values to see which works best with your network. To specify padding options, use the SequenceLength name-value argument.

Note

If you specify the input data as a minibatchqueue object, then the testnet function uses the mini-batch size specified by this argument and not the MiniBatchSize property of the minibatchqueue object.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Option to pad or truncate the input sequences, specified as one of these options:

To learn more about the effects of padding and truncating the input sequences, see Sequence Padding and Truncation.

Direction of padding or truncation, specified as one of these options:

Recurrent layers process sequence data one time step at a time, so when the recurrent layer OutputMode property is "last", any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection name-value argument to "left".

For sequence-to-sequence neural networks (when the OutputMode property is "sequence" for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection name-value argument to "right".

To learn more about the effects of padding and truncating sequences, see Sequence Padding and Truncation.

Value by which to pad the input sequences, specified as a scalar.

Do not pad sequences with NaN, because doing so can propagate errors through the neural network.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Hardware resource, specified as one of these values:

Performance optimization, specified as one of these values:

When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using different input data.

When Acceleration is "mex", the software generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model.

When Acceleration is "auto", the software does not generate a MEX function.

The "mex" option is available only when you use a GPU. You must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in MATLAB®. For setup instructions, see Set Up Compiler (GPU Coder). GPU Coder is not required.

The "mex" option has these limitations:

For quantized networks, the "mex" option requires a CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.
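For example, a minimal sketch of requesting MEX acceleration on a supported GPU (assuming the required compiler and support package are installed, and that XTest and TTest are in-memory test predictors and targets):

accuracy = testnet(net,XTest,TTest,"accuracy", ...
    ExecutionEnvironment="gpu",Acceleration="mex")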

Encoding of categorical inputs, specified as one of these values:

If you convert categorical inputs to their integer values, then the network must have one input channel for each of the categorical inputs. Otherwise, the network must have numCategories channels for each of the categorical inputs, where numCategories is the number of categories of the corresponding categorical input.

Encoding of categorical targets for custom metrics, specified as one of these values:

Before R2025a: If metrics contains "index-crossentropy" and does not contain "crossentropy", then the software automatically converts the targets to numeric class indices and passes them to the metric. Otherwise, if the targets are categorical arrays, then the software automatically converts the targets to one-hot encoded vectors and then passes them to the metric.

Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.

If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.

A data format is a string of characters, where each character describes the type of the corresponding data dimension.

The characters are:

For example, consider an array that represents a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe the data as having the format "CBT" (channel, batch, time).

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

For a neural network with multiple inputs net, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).
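For example, a sketch for a two-input network whose first input takes image data and whose second input takes feature vectors; dsTest is assumed to be a combined datastore whose columns follow net.InputNames and net.OutputNames:

metricValues = testnet(net,dsTest,"rmse", ...
    InputDataFormats=["SSCB" "BC"])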

For more information, see Deep Learning Data Formats.

Data Types: char | string | cell

Description of the target data dimensions, specified as one of these values:

A data format is a string of characters, where each character describes the type of the corresponding data dimension.

The characters are:

For example, consider an array that represents a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe the data as having the format "CBT" (channel, batch, time).

You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

For more information, see Deep Learning Data Formats.

Data Types: char | string | cell

Output Arguments


Evaluated metric values, returned as a numeric vector or a table.

The data type of metricValues depends on the OutputMode name-value argument.

Extended Capabilities


The testnet function fully supports GPU acceleration.

By default, testnet uses a GPU if one is available. If net is a quantized network with the TargetLibrary property set to "none", testnet uses the CPU even if a GPU is available. You can specify the hardware that the testnet function uses by specifying the ExecutionEnvironment name-value argument.

For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).

Version History

Introduced in R2024b


Starting in R2025a, you can evaluate the network on test data using the _R2_ metric.

To use this metric, you can specify either "rsquared" or, if you require greater customization, you can create an RSquaredMetric object.