instanceSegmentationMetrics

Instance segmentation quality metrics

Since R2022b

Description

Use the instanceSegmentationMetrics object and its object functions to evaluate the quality of instance segmentation results.

An instanceSegmentationMetrics object stores instance segmentation quality metrics for a set of images, such as the average precision (AP), precision, and recall, computed per class and per image. To compute the AP or the precision and recall metrics, pass the instanceSegmentationMetrics object to the averagePrecision or precisionRecall object function, respectively. To compute the confusion matrix, pass the instanceSegmentationMetrics object to the confusionMatrix object function. To summarize all metrics across all classes and all images in the data set, use the summarize object function.
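
A minimal sketch of this workflow appears below, assuming dsResults and dsTruth are placeholder datastores of predicted results and ground truth data, and that the metrics object comes from the evaluateInstanceSegmentation function; the output arguments shown for each object function are illustrative assumptions, not a definitive syntax:

    % Compute metrics from hypothetical prediction and ground truth datastores.
    metrics = evaluateInstanceSegmentation(dsResults,dsTruth);

    % Average precision and precision-recall metrics per class.
    ap = averagePrecision(metrics);
    [precision,recall,scores] = precisionRecall(metrics);

    % Confusion matrix of the segmentation results.
    confMat = confusionMatrix(metrics);

    % Summary of metrics across the data set and across classes.
    [dataSetSummary,classSummary] = summarize(metrics);  % output names are assumptions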

Properties


ClassMetrics

This property is read-only.

Metrics per class, specified as a table with C rows, where C is the number of classes in the instance segmentation. Each column of ClassMetrics corresponds to an instance segmentation metric computed per class.

ImageMetrics

This property is read-only.

Metrics for each image in the data set, specified as a table with numImages rows, where numImages is the number of images in the data set. Each column of ImageMetrics corresponds to an instance segmentation metric computed per image.

ClassNames

Class names of segmented objects, specified as a string array.

Example: ["sky" "grass" "building" "sidewalk"]

OverlapThreshold

Overlap threshold, specified as a numeric scalar or numeric vector. When the intersection over union (IoU) of the pixels in the predicted object mask and the ground truth object mask is equal to or greater than the overlap threshold, the prediction is considered a true positive.

IoU, or the Jaccard Index, is the number of pixels in the intersection of the binary masks divided by the number of pixels in the union of the masks. In other words, IoU is the ratio of correctly classified pixels to the total number of pixels that are assigned that class by the ground truth and the predictor.
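
For illustration (not part of this object's API), the IoU of one predicted mask and one ground truth mask can be computed directly, assuming maskPred and maskTruth are logical matrices of the same size and overlapThreshold is a scalar:

    % IoU (Jaccard index) of two binary object masks.
    iou = nnz(maskPred & maskTruth) / nnz(maskPred | maskTruth);

    % The prediction counts as a true positive when IoU meets the threshold.
    isTruePositive = iou >= overlapThreshold;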

Object Functions

averagePrecision Evaluate average precision metric of instance segmentation results
confusionMatrix Compute confusion matrix of instance segmentation results
precisionRecall Get precision and recall metrics of instance segmentation results
summarize Summarize instance segmentation performance metrics at data set and class level
metricsByArea Evaluate instance segmentation across object mask size ranges
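
For instance, a hedged sketch of evaluating metrics by object size, assuming metrics is an instanceSegmentationMetrics object and that metricsByArea accepts an N-by-2 matrix of mask-area ranges in pixels (an assumption; check the function reference for the exact syntax):

    % Hypothetical mask-area ranges (in pixels): small, medium, and large objects.
    areaRanges = [0 32^2; 32^2 96^2; 96^2 Inf];
    areaMetrics = metricsByArea(metrics,areaRanges);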

Version History

Introduced in R2022b


The ConfusionMatrix, NormalizedConfusionMatrix, and DatasetMetrics properties of the instanceSegmentationMetrics object have been removed.

To update your code to compute the confusion matrix, replace instances of the ConfusionMatrix and NormalizedConfusionMatrix properties with the confusionMatrix object function.

To compute the summary of the instance segmentation quality metrics over the entire data set or over each class, use the summarize object function.

To compute precision, recall, and confidence scores for all classes in the data set, or for specified classes and overlap thresholds, use the precisionRecall object function.

To compute the average precision (AP) for all classes and overlap thresholds in the data set, or to specify the classes and overlap thresholds for which to compute AP, use the averagePrecision object function.
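
A hedged migration sketch follows, assuming metrics is an existing instanceSegmentationMetrics object; the output argument lists are illustrative assumptions:

    % Previously, via the removed properties:
    % confMat = metrics.ConfusionMatrix;
    % normConfMat = metrics.NormalizedConfusionMatrix;
    % dataSetMetrics = metrics.DatasetMetrics;

    % Now, via the object functions:
    confMat = confusionMatrix(metrics);
    [dataSetSummary,classSummary] = summarize(metrics);  % output names are assumptions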

For an example that uses the new instanceSegmentationMetrics object functions to evaluate instance segmentation results, see the Perform Instance Segmentation Using SOLOv2 example.

These table columns of the ClassMetrics and ImageMetrics properties have been renamed.

Renamed Columns of instanceSegmentationMetrics Properties

ClassMetrics

mAP, the average precision (AP) averaged over all overlap thresholds for each class, has been renamed to APOverlapAvg.
mLAMR, the log-average miss rate for each class averaged over all specified overlap thresholds, has been renamed to LAMROverlapAvg.
mAOS, the average orientation similarity for each class averaged over all specified overlap thresholds, has been renamed to AOSOverlapAvg.

ImageMetrics

AP, the AP across all classes at each overlap threshold, has been renamed to mAP.
mAP, the AP averaged across all classes and all overlap thresholds, has been renamed to mAPOverlapAvg.
mLAMR, the log-average miss rate for each class averaged over all specified overlap thresholds, has been renamed to LAMROverlapAvg.
mAOS, the average orientation similarity for each class averaged over all specified overlap thresholds, has been renamed to AOSOverlapAvg.