MessageHub — mmengine 0.10.7 documentation
class mmengine.logging.MessageHub(name, log_scalars=None, runtime_info=None, resumed_keys=None)[source]¶
Message hub for component interaction. MessageHub is created and accessed in the same way as ManagerMixin.
MessageHub records log information and runtime information. Log information refers to scalars produced during training, such as the learning rate, loss, and metrics of the model, and is stored as HistoryBuffer. Runtime information refers to values such as the iteration count and the meta information of the runner, and is overwritten by the next update.
Parameters:
- name (str) – Name of message hub used to get corresponding instance globally.
- log_scalars (dict, optional) – Each key-value pair in the dictionary is the name of the log information such as “loss”, “lr”, “metric” and their corresponding values. The type of value must be HistoryBuffer. Defaults to None.
- runtime_info (dict, optional) – Each key-value pair in the dictionary is the name of the runtime information and their corresponding values. Defaults to None.
- resumed_keys (dict, optional) – Each key-value pair in the dictionary decides whether the corresponding key in _log_scalars and _runtime_info will be serialized. Defaults to None.
Note
A key in _resumed_keys belongs to _log_scalars or _runtime_info. The corresponding value cannot be set repeatedly.
Examples
>>> # create an empty MessageHub
>>> message_hub1 = MessageHub('name')
>>> log_scalars = dict(loss=HistoryBuffer())
>>> runtime_info = dict(task='task')
>>> resumed_keys = dict(loss=True)
>>> # create a MessageHub from data
>>> message_hub2 = MessageHub(
...     name='name',
...     log_scalars=log_scalars,
...     runtime_info=runtime_info,
...     resumed_keys=resumed_keys)
classmethod get_current_instance()[source]¶
Get the latest created MessageHub instance.
get_current_instance() can be called before any MessageHub instance has been created; in that case it returns an empty message hub with the instance name "mmengine".
Returns:
Empty MessageHub instance.
Return type:
MessageHub
get_info(key, default=None)[source]¶
Get runtime information by key. If the key does not exist, this method will return default information.
Parameters:
- key (str) – Key of runtime information.
- default (Any, optional) – The default returned value for the given key.
Returns:
A copy of corresponding runtime information if the key exists.
Return type:
Any
get_scalar(key)[source]¶
Get HistoryBuffer instance by key.
Note
Considering the large memory footprint of history buffers after training, get_scalar() returns a reference to the history buffer rather than a copy.
Parameters:
key (str) – Key of HistoryBuffer.
Returns:
Corresponding HistoryBuffer instance if the key exists.
Return type:
HistoryBuffer
load_state_dict(state_dict)[source]¶
Load log scalars, runtime information and resumed keys from state_dict or message_hub.
If state_dict is a dictionary returned by state_dict(), only the data that should be resumed is copied from the source message_hub.
If state_dict is a MessageHub instance, copies of all data are made from the source message_hub. We suggest loading data from a dict rather than a MessageHub instance.
Parameters:
state_dict (dict or MessageHub) – A dictionary containing the keys log_scalars, runtime_info and resumed_keys, or a MessageHub instance.
Return type:
None
property log_scalars: OrderedDict¶
Get all HistoryBuffer instances.
Note
Considering the large memory footprint of history buffers after training, log_scalars returns references to the history buffers rather than copies.
Returns:
All HistoryBuffer
instances.
Return type:
OrderedDict
pop_info(key, default=None)[source]¶
Remove runtime information by key. If the key does not exist, this method will return the default value.
Parameters:
- key (str) – Key of runtime information.
- default (Any, optional) – The default returned value for the given key.
Returns:
The runtime information if the key exists.
Return type:
Any
property runtime_info: OrderedDict¶
Get all runtime information.
Returns:
A copy of all runtime information.
Return type:
OrderedDict
state_dict()[source]¶
Returns a dictionary containing the log scalars, runtime information and resumed keys that should be resumed.
The returned state_dict can be loaded by load_state_dict().
Returns:
A dictionary containing log_scalars, runtime_info and resumed_keys.
Return type:
dict
update_info(key, value, resumed=True)[source]¶
Update runtime information.
The runtime information corresponding to the key is overwritten each time update_info is called.
Note
The resumed argument needs to be consistent for the same key.
Examples
>>> message_hub = MessageHub(name='name')
>>> message_hub.update_info('iter', 100)
Parameters:
- key (str) – Key of runtime information.
- value (Any) – Value of runtime information.
- resumed (bool) – Whether the corresponding runtime information could be resumed. Defaults to True.
Return type:
None
update_info_dict(info_dict, resumed=True)[source]¶
Update runtime information with dictionary.
The runtime information corresponding to each key is overwritten each time update_info_dict is called.
Note
The resumed argument needs to be consistent for the same info_dict.
Examples
>>> message_hub = MessageHub(name='name')
>>> message_hub.update_info_dict({'iter': 100})
Parameters:
- info_dict (dict) – Runtime information dictionary.
- resumed (bool) – Whether the corresponding runtime information could be resumed. Defaults to True.
Return type:
None
update_scalar(key, value, count=1, resumed=True)[source]¶
Update _log_scalars.
Update a HistoryBuffer in _log_scalars. If a HistoryBuffer has already been created for the corresponding key, value and count are passed to HistoryBuffer.update. Otherwise, update_scalar creates a HistoryBuffer with value and count via the HistoryBuffer constructor.
Examples
>>> message_hub = MessageHub(name='name')
>>> # create loss HistoryBuffer with value=1, count=1
>>> message_hub.update_scalar('loss', 1)
>>> # update loss HistoryBuffer with value
>>> message_hub.update_scalar('loss', 3)
>>> message_hub.update_scalar('loss', 3, resumed=False)
AssertionError: loss used to be true, but got false now. resumed keys cannot be modified repeatedly
Note
The resumed argument needs to be consistent for the same key.
Parameters:
- key (str) – Key of HistoryBuffer.
- value (torch.Tensor or np.ndarray or int or float) – Value of log.
- count (torch.Tensor or np.ndarray or int or float) – Accumulation times of log. Defaults to 1. count will be used in smooth statistics.
- resumed (bool) – Whether the corresponding HistoryBuffer could be resumed. Defaults to True.
Return type:
None
update_scalars(log_dict, resumed=True)[source]¶
Update _log_scalars with a dict.
update_scalars iterates through each key-value pair of log_dict and calls update_scalar. If the type of a value is dict, the value should be dict(value=xxx) or dict(value=xxx, count=xxx). Every item in log_dict shares the same resume option.
Note
The resumed argument needs to be consistent for the same log_dict.
Parameters:
- log_dict (dict) – Used for batch updating _log_scalars.
- resumed (bool) – Whether all HistoryBuffer instances referred to in log_dict should be resumed. Defaults to True.
Return type:
None
Examples
>>> message_hub = MessageHub.get_instance('mmengine')
>>> log_dict = dict(a=1, b=2, c=3)
>>> message_hub.update_scalars(log_dict)
>>> # The default count of a, b and c is 1.
>>> log_dict = dict(a=1, b=2, c=dict(value=1, count=2))
>>> message_hub.update_scalars(log_dict)
>>> # The count of c is 2.