semanticSegmentationMetrics
Semantic segmentation quality metrics
Description
A semanticSegmentationMetrics
object encapsulates semantic segmentation quality metrics for a set of images.
Properties
ConfusionMatrix
This property is read-only.
Confusion matrix, specified as a table with C rows and columns, where C is the number of classes in the semantic segmentation. Each table element (i,j) is the count of pixels known to belong to class i but predicted to belong to class j.
NormalizedConfusionMatrix
This property is read-only.
Normalized confusion matrix, specified as a table with C rows and columns, where C is the number of classes in the semantic segmentation. The NormalizedConfusionMatrix represents a confusion matrix normalized by the number of pixels known to belong to each class. Each table element (i,j) is the count of pixels known to belong to class i but predicted to belong to class j, divided by the total number of pixels known to belong to class i. Elements are in the range [0, 1].
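A minimal sketch of this normalization, assuming a raw 2-by-2 confusion matrix of pixel counts (the values are taken from the example below); this is not the toolbox implementation, and the variable names are hypothetical:
% Minimal sketch: divide each row of a raw confusion matrix by the number
% of pixels known to belong to that class (the row total).
confusionCounts = [4697 33; 915 96755];              % raw counts, rows = ground-truth class
rowTotals = sum(confusionCounts,2);                   % pixels known to belong to each class
normalizedConfusion = confusionCounts ./ rowTotals;   % each element now lies in [0, 1]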
DataSetMetrics
This property is read-only.
Semantic segmentation metrics aggregated over the data set, specified as a table with one row. DataSetMetrics has up to five columns, corresponding to the metrics that were specified by the 'Metrics' name-value pair used with evaluateSemanticSegmentation (a sketch relating these metrics to the confusion matrix follows this list):
- GlobalAccuracy — Ratio of correctly classified pixels to total pixels, regardless of class.
- MeanAccuracy — Ratio of correctly classified pixels in each class to total pixels, averaged over all classes. The value is equal to the mean of ClassMetrics.Accuracy.
- MeanIoU — Average intersection over union (IoU) of all classes. The value is equal to the mean of ClassMetrics.IoU.
- WeightedIoU — Average IoU of all classes, weighted by the number of pixels in the class.
- MeanBFScore — Average boundary F1 (BF) score of all images. The value is equal to the mean of ImageMetrics.MeanBFScore. This metric is not available when you create a semanticSegmentationMetrics object by using a confusion matrix as the input to evaluateSemanticSegmentation.
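A minimal sketch of how the accuracy and IoU metrics relate to a raw confusion matrix C of pixel counts (rows are the ground-truth class, columns the predicted class). This is not the toolbox implementation, and the BF score cannot be derived from the confusion matrix alone because it depends on pixel locations:
% Minimal sketch relating the aggregate metrics to a confusion matrix C.
C = [4697 33; 915 96755];                 % hypothetical 2-class example
truePerClass = sum(C,2);                  % ground-truth pixels per class
predPerClass = sum(C,1)';                 % predicted pixels per class
correct      = diag(C);                   % correctly classified pixels per class

globalAccuracy = sum(correct)/sum(C(:));                             % all correct pixels / all pixels
classAccuracy  = correct ./ truePerClass;                            % per-class accuracy
meanAccuracy   = mean(classAccuracy);
classIoU       = correct ./ (truePerClass + predPerClass - correct); % per-class IoU
meanIoU        = mean(classIoU);
weightedIoU    = sum(classIoU .* truePerClass) / sum(truePerClass);  % weighted by class pixel count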
Note
A value of NaN in the data set, class, or image metrics indicates that one or more classes were missing when the metrics were computed with the evaluateSemanticSegmentation function. In this case, the software was unable to accurately compute the metrics.
The missing classes can be found by inspecting the ClassMetrics property, which provides the metrics for each class (see the sketch after this note). To more accurately evaluate your network, augment your ground truth with more data that includes the missing classes.
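A minimal sketch of such a check, assuming metrics is a semanticSegmentationMetrics object and that the class names appear as the row names of the ClassMetrics table, as in the example below:
% Minimal sketch: list the classes whose per-class accuracy is NaN,
% which indicates classes that were missing during metric computation.
cm = metrics.ClassMetrics;
missingClasses = cm.Properties.RowNames(isnan(cm.Accuracy))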
ClassMetrics
This property is read-only.
Semantic segmentation metrics for each class, specified as a table with C rows, where C is the number of classes in the semantic segmentation. ClassMetrics has up to three columns, corresponding to the metrics that were specified by the 'Metrics' name-value pair used with evaluateSemanticSegmentation (a sketch applying these formulas to a confusion matrix follows this list):
- Accuracy — Ratio of correctly classified pixels in each class to the total number of pixels belonging to that class according to the ground truth. Accuracy can be expressed as:
  Accuracy = TP / (TP + FN)
  where TP is the number of true positives and FN is the number of false negatives, as defined in the following table (rows correspond to the ground-truth class, columns to the predicted class).

| | Predicted Positive | Predicted Negative |
| ------------ | ------------------ | ------------------ |
| Positive | TP: True Positive | FN: False Negative |
| Negative | FP: False Positive | TN: True Negative |

- IoU — Ratio of correctly classified pixels to the total number of pixels that are assigned that class by the ground truth and the predictor, where FP is the number of false positives. IoU can be expressed as:
  IoU = TP / (TP + FP + FN)
- MeanBFScore — Boundary F1 score for each class, averaged over all images. This metric is not available when you create a semanticSegmentationMetrics object by using a confusion matrix as the input to evaluateSemanticSegmentation.
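A minimal sketch, assuming a raw confusion matrix C of pixel counts (hypothetical values), that reads TP, FN, and FP for one class and forms the per-class Accuracy and IoU defined above:
% Minimal sketch: per-class Accuracy and IoU from a confusion matrix C
% (rows = ground-truth class, columns = predicted class).
C = [4697 33; 915 96755];       % hypothetical 2-class example
k = 1;                          % index of the class of interest
TP = C(k,k);                    % class-k pixels predicted as class k
FN = sum(C(k,:)) - TP;          % class-k pixels predicted as another class
FP = sum(C(:,k)) - TP;          % other-class pixels predicted as class k

accuracy = TP / (TP + FN);      % per-class accuracy
iou      = TP / (TP + FP + FN); % intersection over union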
ImageMetrics
This property is read-only.
Semantic segmentation metrics for each image in the data set, specified as a table with N rows, where N is the number of images in the data set. ImageMetrics has up to five columns, corresponding to the metrics that were specified by the 'Metrics' name-value pair used with evaluateSemanticSegmentation:
- GlobalAccuracy — Ratio of correctly classified pixels to total pixels, regardless of class.
- MeanAccuracy — Ratio of correctly classified pixels to total pixels, averaged over all classes in the image.
- MeanIoU — Average IoU of all classes in the image.
- WeightedIoU — Average IoU of all classes in the image, weighted by the number of pixels in each class.
- MeanBFScore — Average BF score of each class in the image. This metric is not available when you create a semanticSegmentationMetrics object by using a confusion matrix as the input to evaluateSemanticSegmentation.
Each image metric is a vector with one element for each image in the data set. The order of the rows matches the order of the images defined by the input PixelLabelDatastore objects representing the data set, as shown in the sketch below.
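A minimal sketch, assuming metrics is a semanticSegmentationMetrics object and imds is the imageDatastore used for the evaluation (so its file order matches the row order of ImageMetrics), that locates the image with the lowest per-image mean IoU:
% Minimal sketch: find the image with the weakest segmentation result.
[worstIoU,idx] = min(metrics.ImageMetrics.MeanIoU);
worstImageFile = imds.Files{idx};   % row order matches the datastore order
fprintf("Lowest MeanIoU is %.3f for image %s\n",worstIoU,worstImageFile);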
Examples
The triangleImages
data set has 100 test images with ground truth labels. Define the location of the data set.
dataSetDir = fullfile(toolboxdir("vision"),"visiondata","triangleImages");
Define the location of the test images and ground truth labels.
testImagesDir = fullfile(dataSetDir,"testImages");
testLabelsDir = fullfile(dataSetDir,"testLabels");
Create an imageDatastore
holding the test images.
imds = imageDatastore(testImagesDir);
Define the class names and their associated label IDs.
classNames = ["triangle" "background"];
labelIDs = [255 0];
Create a pixelLabelDatastore
holding the ground truth pixel labels for the test images.
pxdsTruth = pixelLabelDatastore(testLabelsDir,classNames,labelIDs);
Load a semantic segmentation network that has been trained on the training images of triangleImages
.
net = load("triangleSegmentationNetwork");
net = net.net;
Run the network on the test images. Predicted labels are written to disk in a temporary directory and returned as a pixelLabelDatastore
.
pxdsResults = semanticseg(imds,net,Classes=classNames,WriteLocation=tempdir);
Running semantic segmentation network
- Processed 100 images.
Evaluate the prediction results against the ground truth.
metrics = evaluateSemanticSegmentation(pxdsResults,pxdsTruth);
Evaluating semantic segmentation results
Selected metrics: global accuracy, class accuracy, IoU, weighted IoU, BF score.
Processed 100 images.
Finalizing... Done.
Data set metrics:
GlobalAccuracy MeanAccuracy MeanIoU WeightedIoU MeanBFScore
0.99074 0.99183 0.91118 0.98299 0.80563
Display the properties of the semanticSegmentationMetrics
object.
metrics
metrics = 
  semanticSegmentationMetrics with properties:
ConfusionMatrix: [2×2 table]
NormalizedConfusionMatrix: [2×2 table]
DataSetMetrics: [1×5 table]
ClassMetrics: [2×3 table]
ImageMetrics: [100×5 table]
Display the classification accuracy, the intersection over union, and the boundary F-1 score for each class. These values are stored in the ClassMetrics
property.
metrics.ClassMetrics
ans=2×3 table
                  Accuracy      IoU      MeanBFScore
                  ________    _______    ___________

    triangle       0.99302    0.83206      0.67208
    background     0.99063     0.9903      0.93918
Display the confusion matrix of raw pixel counts that is stored in the ConfusionMatrix property.
metrics.ConfusionMatrix
ans=2×2 table
                  triangle    background
                  ________    __________

    triangle        4697           33
    background       915        96755
Version History
Introduced in R2017b