Add New Metrics

Develop with the source code of MMSegmentation

Here we show how to develop a new metric, using a CustomMetric class as an example.

  1. Create a new file mmseg/evaluation/metrics/custom_metric.py.
    from typing import Sequence

    from mmengine.evaluator import BaseMetric

    from mmseg.registry import METRICS


    @METRICS.register_module()
    class CustomMetric(BaseMetric):
        """
        The metric first processes each batch of data_samples and predictions,
        and appends the processed results to the results list. Then it
        collects all results together from all ranks if distributed training
        is used. Finally, it computes the metrics of the entire dataset.
        """

        def __init__(self, arg1, arg2):
            # Initialize BaseMetric so that self.results and the
            # collection options are set up.
            super().__init__()

        def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
            pass

        def compute_metrics(self, results: list) -> dict:
            pass

        def evaluate(self, size: int) -> dict:
            pass

In the above example, CustomMetric is a subclass of BaseMetric. It has three methods: process, compute_metrics and evaluate. process collects per-batch results into self.results, compute_metrics turns the collected results into the final scores, and evaluate (already implemented by BaseMetric) gathers results across ranks and calls compute_metrics, so it usually does not need to be overridden.
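To make the workflow concrete, here is a minimal sketch of a complete metric that computes overall pixel accuracy. The PixelAccuracy name is hypothetical; the pred_sem_seg / gt_sem_seg dict fields follow the layout used by MMSegmentation's built-in IoUMetric, so adjust them if your data samples differ.

    from typing import Sequence

    from mmengine.evaluator import BaseMetric

    from mmseg.registry import METRICS


    @METRICS.register_module()
    class PixelAccuracy(BaseMetric):
        """Hypothetical example: overall pixel accuracy over the dataset."""

        def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
            # Runs once per batch: keep only what compute_metrics needs.
            for data_sample in data_samples:
                pred = data_sample['pred_sem_seg']['data'].squeeze()
                gt = data_sample['gt_sem_seg']['data'].squeeze().to(pred)
                self.results.append(((pred == gt).sum().item(), gt.numel()))

        def compute_metrics(self, results: list) -> dict:
            # Runs once after the results from all batches (and all ranks,
            # in distributed training) have been gathered.
            correct = sum(c for c, _ in results)
            total = sum(n for _, n in results)
            return dict(pixel_accuracy=correct / total)

Because evaluate is not overridden here, BaseMetric's default implementation handles the cross-rank result collection.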

  2. Import the new metric in mmseg/evaluation/metrics/__init__.py.
    from .custom_metric import CustomMetric
    __all__ = ['CustomMetric', ...]
  3. Add the new metric to the config file.
    val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
    test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
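Note that val_evaluator and test_evaluator also accept a list of metric configs, so a custom metric can run alongside built-in ones. The combination below is a sketch, pairing the hypothetical CustomMetric with MMSegmentation's built-in IoUMetric:

    val_evaluator = [
        dict(type='IoUMetric', iou_metrics=['mIoU']),  # built-in metric
        dict(type='CustomMetric', arg1=xxx, arg2=xxx),  # the new metric
    ]
    test_evaluator = val_evaluator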

Develop with the released version of MMSegmentation

The above example shows how to develop a new metric with the source code of MMSegmentation. If you instead want to develop a new metric against the released version of MMSegmentation, follow the steps below.

  1. Create a new file /Path/to/metrics/custom_metric.py and implement the process and compute_metrics methods; the evaluate method is optional. A quick way to exercise such a file is sketched after this list.
  2. Import the new metric in your code or config file.
    from path.to.metrics import CustomMetric
    or, in the config file (note that custom_imports takes a dotted module path, not a filesystem path):
    custom_imports = dict(imports=['path.to.metrics'], allow_failed_imports=False)
    val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
    test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
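As a quick sanity check, a metric file like the one from step 1 can be exercised directly, without a Runner. Everything below is illustrative: the import path mirrors the placeholder above, and the metric is assumed to be implemented along the lines of the PixelAccuracy sketch earlier in this document.

    import torch

    from path.to.metrics import CustomMetric  # hypothetical module path

    metric = CustomMetric(arg1=1, arg2=2)  # hypothetical constructor arguments

    # One fake data sample in the dict layout the evaluator passes to process().
    data_samples = [dict(
        pred_sem_seg=dict(data=torch.zeros(1, 4, 4, dtype=torch.long)),
        gt_sem_seg=dict(data=torch.zeros(1, 4, 4, dtype=torch.long)),
    )]

    metric.process(data_batch=dict(), data_samples=data_samples)
    # evaluate() gathers the stored results and calls compute_metrics().
    print(metric.evaluate(size=len(data_samples)))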