Evaluator — mmengine 0.10.7 documentation

class mmengine.evaluator.Evaluator(metrics)[source]

Wrapper class to compose multiple BaseMetric instances.

Parameters:

metrics (dict or BaseMetric or Sequence) – The config of metrics.
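
A minimal sketch of composing an Evaluator from a custom metric. `ToyAccuracy`, its `pred`/`gt` keys, and the `toy` prefix are illustrative assumptions for this sketch, not part of mmengine:

```python
from mmengine.evaluator import BaseMetric, Evaluator

class ToyAccuracy(BaseMetric):
    """Hypothetical metric that counts exact prediction matches."""
    default_prefix = 'toy'

    def process(self, data_batch, data_samples):
        # Accumulate one intermediate result per sample;
        # the 'pred'/'gt' keys are assumptions for this sketch.
        for sample in data_samples:
            self.results.append(int(sample['pred'] == sample['gt']))

    def compute_metrics(self, results):
        # Reduce the accumulated per-sample results into the final dict.
        return {'accuracy': sum(results) / len(results)}

# `metrics` may be a single BaseMetric, a config dict, or a sequence of either.
evaluator = Evaluator(ToyAccuracy())
```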

property dataset_meta: dict | None

Meta info of the dataset.

Type:

Optional[dict]
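
Setting dataset_meta on the evaluator propagates the meta info to every wrapped metric; the `classes` key below is an illustrative assumption:

```python
evaluator.dataset_meta = {'classes': ('cat', 'dog')}
# Each wrapped metric now shares the same meta info.
assert evaluator.metrics[0].dataset_meta == {'classes': ('cat', 'dog')}
```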

evaluate(size)[source]

Invoke the evaluate method of each metric and collect the metrics dictionary.

Parameters:

size (int) – Length of the entire validation dataset. When batch size > 1, the dataloader may pad some data samples so that all ranks hold dataset slices of the same length. The collect_results function will drop the padded data based on this size.

Returns:

Evaluation results of all metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type:

dict
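
A sketch of collecting results once all batches have been processed (see process below), continuing the `ToyAccuracy` example; `val_dataset` is a stand-in for the validation dataset:

```python
# The reported results keep only the first `size` samples,
# dropping any distributed padding, then merge each metric's output.
results = evaluator.evaluate(size=len(val_dataset))
# Keys are prefixed per metric, e.g. {'toy/accuracy': 0.75}
```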

offline_evaluate(data_samples, data=None, chunk_size=1)[source]

Offline evaluate the dumped predictions on the given data.

Parameters:

data_samples (Sequence) – All predictions and ground truth of the model and the validation set.

data (Sequence, optional) – All data of the validation set. Defaults to None.

chunk_size (int) – The number of data samples and predictions to be processed in a batch. Defaults to 1.
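
A sketch of evaluating dumped predictions without rerunning the model, continuing the `ToyAccuracy` example; the sample dicts below are illustrative:

```python
# Dumped predictions paired with ground truth, e.g. loaded back from disk.
data_samples = [
    dict(pred=0, gt=0),
    dict(pred=1, gt=0),
    dict(pred=1, gt=1),
]
# Samples are fed to the metrics in chunks of `chunk_size`,
# then evaluate() is invoked over the full length.
results = evaluator.offline_evaluate(data_samples, chunk_size=2)
```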

process(data_samples, data_batch=None)[source]

Convert BaseDataSample to dict and invoke the process method of each metric.

Parameters:

data_samples (Sequence[BaseDataElement]) – Predictions of the model, and the ground truth of the validation set.

data_batch (Any, optional) – A batch of data from the dataloader.
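
A sketch of the online loop that feeds the evaluator batch by batch; `model` and `val_loader` are stand-ins for a trained model and validation dataloader:

```python
for data_batch in val_loader:
    # Predictions may be BaseDataElement instances or plain dicts;
    # the evaluator converts the former to dicts before dispatching
    # to each wrapped metric's own `process` method.
    outputs = model.val_step(data_batch)
    evaluator.process(data_samples=outputs, data_batch=data_batch)
```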