LogProcessor — mmengine 0.11.0rc0 documentation
class mmengine.runner.LogProcessor(window_size=10, by_epoch=True, custom_cfg=None, num_digits=4, log_with_hierarchy=False, mean_pattern='.*(loss|time|data_time|grad_norm).*')[source]¶
A log processor used to format log information collected from `runner.message_hub.log_scalars`.
The LogProcessor instance is built by the runner and formats `runner.message_hub.log_scalars` into a tag and a log_str, which can be used directly by LoggerHook and MMLogger. Besides, the constructor argument `custom_cfg` controls how the statistics of logs are computed.
Parameters:
- window_size (int) – The default smoothing window size. Defaults to 10.
- by_epoch (bool) – Whether to format logs with epoch style. Defaults to True.
- custom_cfg (list[dict], optional) –
A list of log config dicts, in which `data_src` specifies the data source name of the log, and the remaining keys specify the statistic method and corresponding arguments used to count the data source. Defaults to None.
  - If custom_cfg is None, all logs are formatted via the default methods, such as smoothing loss by the default window_size. If custom_cfg is defined as a list of config dicts, for example: [dict(data_src='loss', method='mean', log_name='global_loss', window_size='global')], the log item `loss` will be counted as a global mean and additionally logged as `global_loss` (defined by `log_name`). If `log_name` is not defined in the config dict, the original logged key will be overwritten.
  - The original log item cannot be overwritten twice. Here is an error example: [dict(data_src='loss', method='mean', window_size='global'), dict(data_src='loss', method='mean', window_size='epoch')]. Neither config dict in custom_cfg has a `log_name` key, which means the loss item would be overwritten twice.
  - For those statistic methods with the `window_size` argument, if `by_epoch` is set to False, `window_size` should not be 'epoch', since log values cannot be aggregated by epoch in iteration-based training.
- num_digits (int) – The number of significant digits shown in the logging message. Defaults to 4.
- log_with_hierarchy (bool) – Whether to log with hierarchy. If True, the information is written to visualizer backends such as `LocalVisBackend` and `TensorboardBackend` with hierarchy. For example, `loss` will be saved as `train/loss`, and accuracy will be saved as `val/accuracy`. Defaults to False. New in version 0.7.0.
- mean_pattern (str) – A regular expression used to match the logs that need to be included in the smoothing statistics. New in version 0.7.3.
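To make the window semantics above concrete, here is a minimal pure-Python sketch, independent of mmengine, of the two statistics the parameters describe: a sliding mean over the last `window_size` values (the default smoothing) and a global mean over the whole history (`window_size='global'` in custom_cfg). `ScalarBuffer` is a hypothetical stand-in, not mmengine's actual HistoryBuffer.

```python
from statistics import mean


class ScalarBuffer:
    """Hypothetical stand-in for a logged scalar's history."""

    def __init__(self):
        self._values = []

    def update(self, value):
        self._values.append(value)

    def windowed_mean(self, window_size=10):
        # Smooth over the most recent `window_size` values
        # (the default behaviour described above).
        return mean(self._values[-window_size:])

    def global_mean(self):
        # Corresponds to window_size='global' in custom_cfg.
        return mean(self._values)


buf = ScalarBuffer()
for v in [1.0, 2.0, 3.0, 4.0]:
    buf.update(v)
print(buf.windowed_mean(window_size=2))  # mean of [3.0, 4.0] -> 3.5
print(buf.global_mean())                 # mean of all four values -> 2.5
```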
Examples
If `log_name` is defined, `loss_large_window` will be an additional record.

    log_processor = dict(
        window_size=10,
        by_epoch=True,
        custom_cfg=[dict(data_src='loss',
                         log_name='loss_large_window',
                         method_name='mean',
                         window_size=100)])
If `log_name` is not defined, `loss` will be overwritten.

    log_processor = dict(
        window_size=10,
        by_epoch=True,
        custom_cfg=[dict(data_src='loss',
                         method_name='mean',
                         window_size=100)])
Record loss with different statistics methods.
    log_processor = dict(
        window_size=10,
        by_epoch=True,
        custom_cfg=[dict(data_src='loss',
                         log_name='loss_large_window',
                         method_name='mean',
                         window_size=100),
                    dict(data_src='loss',
                         method_name='mean',
                         window_size=100)])
Overwriting the loss item twice will raise an error.

    log_processor = dict(
        window_size=10,
        by_epoch=True,
        custom_cfg=[dict(data_src='loss',
                         method_name='mean',
                         window_size=100),
                    dict(data_src='loss',
                         method_name='max',
                         window_size=100)])
    # AssertionError
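The error in the last example comes from the rule that a data source may be overwritten at most once. A simplified sketch of such a consistency check (illustrative only, not mmengine's actual implementation) might look like:

```python
def check_custom_cfg(custom_cfg):
    """Reject configs where the same data_src would be overwritten twice.

    Hypothetical sketch of the rule described above: a config dict
    without a `log_name` key overwrites its original log key, so at
    most one such dict is allowed per data source.
    """
    overwritten = set()
    for cfg in custom_cfg:
        if 'log_name' not in cfg:
            src = cfg['data_src']
            assert src not in overwritten, (
                f'{src} would be overwritten twice; add a log_name key')
            overwritten.add(src)


# Valid: the second dict renames its output via log_name.
check_custom_cfg([
    dict(data_src='loss', method_name='mean', window_size=100),
    dict(data_src='loss', method_name='max', window_size=100,
         log_name='loss_max'),
])

# Invalid (the error example above): both dicts would overwrite 'loss'.
try:
    check_custom_cfg([
        dict(data_src='loss', method_name='mean', window_size=100),
        dict(data_src='loss', method_name='max', window_size=100),
    ])
except AssertionError as e:
    print('rejected:', e)
```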
get_log_after_epoch(runner, batch_idx, mode, with_non_scalar=False)[source]¶
Format log string after validation or testing epoch.
Parameters:
- runner (Runner) – The runner of validation/testing phase.
- batch_idx (int) – The index of the current batch in the current loop.
- mode (str) – Current mode of runner.
- with_non_scalar (bool) – Whether to include non-scalar infos in the returned tag. Defaults to False.
Returns:
Formatted log dict/string which will be recorded by runner.message_hub and runner.visualizer.
Return type:
Tuple[dict, str]
get_log_after_iter(runner, batch_idx, mode)[source]¶
Format log string after training, validation or testing iteration.
Parameters:
- runner (Runner) – The runner of the training, validation or testing phase.
- batch_idx (int) – The index of the current batch in the current loop.
- mode (str) – Current mode of the runner: 'train', 'val' or 'test'.
Returns:
Formatted log dict/string which will be recorded by runner.message_hub and runner.visualizer.
Return type:
Tuple[dict, str]
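A rough sketch of how the tag formatting described in this page fits together: prefixing keys with the mode when logging with hierarchy, and rounding values to `num_digits` significant digits. The function name and the exact rounding below are illustrative assumptions, not mmengine's actual implementation.

```python
def format_tag(scalars, mode='train', with_hierarchy=False, num_digits=4):
    """Hypothetical sketch of tag formatting.

    Mimics the behaviour described above: with hierarchy enabled,
    'loss' becomes 'train/loss' (or 'val/accuracy', etc.), and values
    are rounded to `num_digits` significant digits.
    """
    tag = {}
    for name, value in scalars.items():
        key = f'{mode}/{name}' if with_hierarchy else name
        # Round to `num_digits` significant digits via the 'g' format.
        tag[key] = float(f'{value:.{num_digits}g}')
    return tag


print(format_tag({'loss': 0.123456, 'accuracy': 98.7654},
                 mode='val', with_hierarchy=True))
# {'val/loss': 0.1235, 'val/accuracy': 98.77}
```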