instanceSegmentationMetrics
Instance segmentation quality metrics
Since R2022b
Description
Use the instanceSegmentationMetrics object and its object functions to evaluate the quality of instance segmentation results.

An instanceSegmentationMetrics object stores instance segmentation quality metrics for a set of images, such as the average precision (AP) and the precision and recall, computed per class and per image. To compute the AP or the precision and recall metrics, pass the instanceSegmentationMetrics object to the averagePrecision or precisionRecall object function, respectively. To compute the confusion matrix, pass the instanceSegmentationMetrics object to the confusionMatrix object function. To summarize all metrics across all classes and all images in the data set, use the summarize object function.
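As a sketch of the typical workflow (hedged: the datastore names dsResults and dsTruth are placeholders, and the exact return forms of each object function are described on their reference pages), you obtain an instanceSegmentationMetrics object from the evaluateInstanceSegmentation function and then query it:

```matlab
% Sketch: dsResults and dsTruth are placeholder datastores that return the
% predicted and ground truth instance masks and labels for the same images.
metrics = evaluateInstanceSegmentation(dsResults, dsTruth, 0.5);

ap = averagePrecision(metrics);  % AP per class and overlap threshold
pr = precisionRecall(metrics);   % precision and recall metrics
cm = confusionMatrix(metrics);   % confusion matrix of the results
s  = summarize(metrics);         % summary across the data set and classes
```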
Properties
ClassMetrics — Metrics per class

This property is read-only.

Metrics per class, specified as a table with C rows, where C is the number of classes in the instance segmentation. ClassMetrics has these columns, corresponding to these instance segmentation metrics:
- mAP — Mean average precision for the class, calculated by averaging the AP over all overlap thresholds specified in the threshold argument.
- AP — Average precision calculated for the class at each overlap threshold in [OverlapThreshold](instancesegmentationmetrics.html#mw%5F9cbf1b50-b454-4913-ae36-8ce7985991a0), returned as a numThresh-by-1 array, where numThresh is the number of overlap thresholds.
- Precision — Precision values, returned as a numThresh-by-(numPredictions+1) matrix, where numPredictions is the number of predicted object masks. Precision is the ratio of the number of true positives (TP) to the total number of predicted positives:

  Precision = TP / (TP + FP)

  FP is the number of false positives, that is, predicted object instances composed of misclassified pixels. Larger precision scores imply that most predicted object instances match ground truth objects.
- Recall — Recall values, returned as a numThresh-by-(numPredictions+1) matrix, where numPredictions is the number of predicted object masks. Recall is the ratio of the number of true positives (TP) to the total number of ground truth positives:

  Recall = TP / (TP + FN)

  FN is the number of false negatives, that is, ground truth instances whose pixels are misclassified as background. Larger recall scores imply that most ground truth objects are detected. A short numeric sketch of both formulas follows this list.
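A minimal numeric sketch of the two formulas above (the TP, FP, and FN counts here are made up for illustration):

```matlab
% Illustrative counts only, not from a real evaluation.
TP = 40;  FP = 10;  FN = 10;

precision = TP / (TP + FP)  % 0.8: most predictions match a ground truth object
recall    = TP / (TP + FN)  % 0.8: most ground truth objects are detected
```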
ImageMetrics — Metrics for each image

This property is read-only.

Metrics for each image in the data set, specified as a table with numImages rows, where numImages is the number of images in the data set. ImageMetrics has these columns, corresponding to these instance segmentation metrics:
- NumObjects — Number of ground truth objects in each image, stored as a positive integer.
- mAP — Mean average precision (mAP), calculated by averaging the AP across all classes at each overlap threshold in the [OverlapThreshold](objectdetectionmetrics.html#mw%5Feca15534-c978-4775-b162-db632406f5f1) property, stored as a numThresh-by-1 numeric vector, where numThresh is the number of overlap thresholds. Specify the overlap thresholds for an image using the threshold argument.
- mAPOverlapAvg — Mean average precision (mAP), calculated by averaging the AP across all classes and all overlap thresholds specified by the [OverlapThreshold](objectdetectionmetrics.html#mw%5Feca15534-c978-4775-b162-db632406f5f1) property, stored as a numeric scalar. A short usage sketch follows this list.
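Because ImageMetrics is a table, standard table operations apply. As a sketch (assuming metrics is an instanceSegmentationMetrics object; variable names are illustrative), you can surface the weakest images by sorting on the overlap-averaged mAP column:

```matlab
% Sketch: rank images from worst to best overlap-averaged mAP.
imgMetrics = sortrows(metrics.ImageMetrics, "mAPOverlapAvg");
head(imgMetrics)  % the lowest-scoring images appear first
```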
ClassNames — Class names of segmented objects

Class names of segmented objects, specified as a string array.

Example: ["sky" "grass" "building" "sidewalk"]
OverlapThreshold — Overlap threshold

Overlap threshold, specified as a numeric scalar or numeric vector. When the intersection over union (IoU) of the pixels in the predicted object mask and the ground truth object mask is equal to or greater than the overlap threshold, the prediction is considered a true positive.

IoU, also known as the Jaccard index, is the number of pixels in the intersection of the two binary masks divided by the number of pixels in their union. In other words, IoU is the ratio of correctly classified pixels to the total number of pixels assigned to that class by either the ground truth or the predictor.
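A minimal sketch of the IoU computation for two toy binary masks (maskPred and maskTruth are placeholders for a predicted and a ground truth mask):

```matlab
% Two toy 5-by-5 logical masks standing in for real object masks.
maskPred  = false(5);  maskPred(2:4, 2:4) = true;   % 9 pixels
maskTruth = false(5);  maskTruth(3:5, 3:5) = true;  % 9 pixels

% IoU (Jaccard index): intersection pixel count over union pixel count.
iou = nnz(maskPred & maskTruth) / nnz(maskPred | maskTruth)
% iou = 4/14, about 0.29, so at a 0.5 overlap threshold this prediction
% would not count as a true positive.
```

Image Processing Toolbox also provides a jaccard function that computes this quantity directly for binary, label, or categorical images.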
Object Functions
Function | Description |
---|---|
averagePrecision | Evaluate average precision metric of instance segmentation results |
confusionMatrix | Compute confusion matrix of instance segmentation results |
precisionRecall | Get precision recall metrics of instance segmentation results |
summarize | Summarize instance segmentation performance metrics at data set and class level |
metricsByArea | Evaluate instance segmentation across object mask size ranges (see the sketch after this table) |
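As a hedged sketch of metricsByArea (the pixel-area bin edges below are arbitrary choices, not canonical values; see the metricsByArea reference page for the exact argument form):

```matlab
% Sketch: evaluate metrics within small, medium, and large mask-area bins.
areaRanges = [0     32^2;   % small objects
              32^2  96^2;   % medium objects
              96^2  Inf];   % large objects
areaMetrics = metricsByArea(metrics, areaRanges)
```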
Version History
Introduced in R2022b
The ConfusionMatrix, NormalizedConfusionMatrix, and DatasetMetrics properties of the instanceSegmentationMetrics object have been removed.
To update your code to compute the confusion matrix, replace instances of the ConfusionMatrix and NormalizedConfusionMatrix properties with the confusionMatrix object function.
To compute the summary of the instance segmentation quality metrics over the entire data set or over each class, use the summarize object function.
To compute precision, recall, and confidence scores for all classes in the data set, or at specified classes and overlap thresholds, use the precisionRecall object function.
To compute the average precision (AP) for all classes and overlap thresholds in the data set, or for specified classes and overlap thresholds, use the averagePrecision object function.
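A before/after sketch of the migration (variable names are illustrative; the commented-out lines show the removed property accesses):

```matlab
% Before (removed): metrics exposed as object properties.
% cm = metrics.ConfusionMatrix;
% ds = metrics.DatasetMetrics;

% After: compute the same information with object functions.
cm = confusionMatrix(metrics);  % replaces the ConfusionMatrix and
                                % NormalizedConfusionMatrix properties
s  = summarize(metrics);        % replaces the DatasetMetrics property
```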
For an example that uses the new instanceSegmentationMetrics object functions to evaluate instance segmentation results, see the Perform Instance Segmentation Using SOLOv2 example.
These table columns of the ClassMetrics and ImageMetrics properties have been renamed.
instanceSegmentationMetrics Property | Renamed Columns |
---|---|
ClassMetrics | mAP, or the average precision (AP) averaged over all overlap thresholds for each class, has been renamed to APOverlapAvg. mLAMR, or the log-average miss rate for each class averaged over all specified overlap thresholds, has been renamed to LAMROverlapAvg. mAOS, or the average orientation similarity for each class averaged over all the specified overlap thresholds, has been renamed to AOSOverlapAvg. |
ImageMetrics | AP, or the AP across all classes at each overlap threshold, has been renamed to mAP. mAP, or the AP averaged across all classes and all overlap thresholds, has been renamed to mAPOverlapAvg. mLAMR, or the log-average miss rate for each class averaged over all specified overlap thresholds, has been renamed to LAMROverlapAvg. mAOS, or the average orientation similarity for each class averaged over all the specified overlap thresholds, has been renamed to AOSOverlapAvg. |