trainPatchCoreAnomalyDetector - Train PatchCore anomaly detection network - MATLAB

Train PatchCore anomaly detection network

Since R2023a

Syntax

Description

detector = trainPatchCoreAnomalyDetector(normalData,detectorIn) trains the input PatchCore anomaly detection network detectorIn. The training data consists of normal images in normalData.

Note

This functionality requires Deep Learning Toolbox™ and the Automated Visual Inspection Library for Computer Vision Toolbox™. You can install the Automated Visual Inspection Library for Computer Vision Toolbox from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

Note

It is recommended that you also have Parallel Computing Toolbox™ to use with a CUDA®-enabled NVIDIA® GPU. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).


detector = trainPatchCoreAnomalyDetector(___,Name=Value) specifies options that control aspects of network creation and training as one or more name-value arguments, in addition to all input arguments from the previous syntax.

Examples


Load a data set that consists of images of digits from 0 to 9. Consider images of the digit 8 to be normal, and all other digits to be anomalous.

dataDir = fullfile(toolboxdir("vision"),"visiondata","digits","synthetic");
dsNormal = imageDatastore(fullfile(dataDir,"8"));

Create a patchCoreAnomalyDetector object.

untrainedDetector = patchCoreAnomalyDetector(Backbone="resnet18");

Train the anomaly detector.

detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector);

Computing Input Normalization Statistics.
Training PatchCore Model.
Step 1: Computing embeddings for each minibatch
Done creating uncompressed train embeddings.
Step 2: Compress train embeddings at 0.1 factor to create a coreset...
Computing coreset samples
Done compressing train embeddings (coreset).
Done training PatchCore model.

Set the anomaly threshold of the detector using a calibration data set.

calDir = fullfile(toolboxdir("vision"),"visiondata","digits","handwritten");
dsCal = imageDatastore(calDir,IncludeSubfolders=true,LabelSource="foldernames");
gtLabels = dsCal.Labels;
anomalyLabels = setdiff(string(0:9),"8");
scores = predict(detector,dsCal);
[T,roc] = anomalyThreshold(gtLabels,scores,anomalyLabels)

roc =
  rocmetrics with properties:

    Metrics: [121×4 table]
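To complete the workflow, you can assign the calibrated threshold to the detector and classify a test image. This is a minimal sketch that assumes the detector, T, and dsCal variables from the example above; classify returns true when the image is predicted to be anomalous.

```matlab
% Assign the calibrated anomaly threshold to the trained detector.
detector.Threshold = T;

% Read one calibration image and classify it as anomalous (true)
% or normal (false).
img = readimage(dsCal,1);
isAnomaly = classify(detector,img);
```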

Input Arguments


Training data, specified as a datastore. The training data consists of samples of normal images. Do not include anomaly images in the training data.

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: trainPatchCoreAnomalyDetector(normalData,detectorIn,CompressionRatio=0.1) specifies that the detector must use only 10% of the features extracted from training images.

Compression ratio, specified as a number in the range [0, 1]. This value indicates the fraction of compression applied to the features extracted from training images when constructing the memory bank. For example, a compression ratio of 0.1 specifies that the detector must use only 10% of the features. This ratio is a trade-off between memory and accuracy: a smaller CompressionRatio value increases compression but decreases accuracy.
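For instance, you can train with a larger memory bank by raising the compression ratio. The sketch below assumes the dsNormal datastore and untrainedDetector object from the earlier example; 0.25 is an illustrative value, not a recommendation.

```matlab
% Keep 25% of the extracted features (less compression than 0.1),
% trading higher memory use for potentially better accuracy.
detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector, ...
    CompressionRatio=0.25);
```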

Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that the training function uses to evaluate the gradient of the loss function and update the weights.
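If training exhausts GPU or system memory, a smaller mini-batch can help. This sketch again assumes the variables from the earlier example; the value 16 is illustrative, so pick one that fits your hardware.

```matlab
% Reduce the mini-batch size to lower peak memory use during training.
detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector, ...
    MiniBatchSize=16);
```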

Hardware resource for running the neural network, specified as one of these options:

For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud (Deep Learning Toolbox).

Reset input layer normalization using training images, specified as a logical 1 (true) or 0 (false):

Subsampling method used to reduce redundancy in the feature bank that occurs during training, specified as one of these options:

The greedy coreset subsampling method has higher accuracy at the expense of slower run time. The random subsampling method can increase run speed at the expense of a decrease in comparative accuracy. When using the random subsampling method, increasing the number of training images can improve accuracy, but also increases the model size and inference time.
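The trade-off above can be exercised through the SubsamplingStrategy name-value argument. This sketch assumes the variables from the earlier example; "random" is an assumed option name for the random subsampling method described above, alongside the documented "greedycoreset" value.

```matlab
% Use random subsampling for faster training, at some cost in accuracy
% relative to the greedy coreset method.
detector = trainPatchCoreAnomalyDetector(dsNormal,untrainedDetector, ...
    SubsamplingStrategy="random");
```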

Display training progress information in the command window, specified as a logical 1 (true) or 0 (false).

Output Arguments

Tips

Version History

Introduced in R2023a


Support for using MATLAB® Compiler™ will be removed in a future release.

Specify the subsampling method by using the SubsamplingStrategy name-value argument. For example, SubsamplingStrategy="greedycoreset" specifies the greedy coreset subsampling method.