LearningRateMonitor — PyTorch Lightning 2.5.1.post0 documentation
class lightning.pytorch.callbacks.LearningRateMonitor(logging_interval=None, log_momentum=False, log_weight_decay=False)[source]¶
Bases: Callback
Automatically monitors and logs the learning rate for learning rate schedulers during training.
Parameters:
- logging_interval (Optional[Literal['step', 'epoch']]) – set to 'epoch' or 'step' to log the lr of all optimizers at the same interval, set to None to log at individual intervals according to the interval key of each scheduler. Defaults to None.
- log_momentum (bool) – option to also log the momentum values of the optimizer, if the optimizer has the momentum or betas attribute. Defaults to False.
- log_weight_decay (bool) – option to also log the weight decay values of the optimizer. Defaults to False.
Raises:
MisconfigurationException – If logging_interval is none of "step", "epoch", or None.
Example:
    from lightning.pytorch import Trainer
    from lightning.pytorch.callbacks import LearningRateMonitor

    lr_monitor = LearningRateMonitor(logging_interval='step')
    trainer = Trainer(callbacks=[lr_monitor])
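The remaining constructor options can be combined freely. A minimal sketch (Trainer arguments omitted) that also logs momentum and weight decay, and leaves logging_interval=None so each scheduler's own interval key decides when its lr is logged:

    from lightning.pytorch import Trainer
    from lightning.pytorch.callbacks import LearningRateMonitor

    lr_monitor = LearningRateMonitor(
        logging_interval=None,   # follow each scheduler's own 'interval' key
        log_momentum=True,       # also log 'momentum' / 'betas' values
        log_weight_decay=True,   # also log the optimizers' weight decay
    )
    trainer = Trainer(callbacks=[lr_monitor])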
Logging names are automatically determined based on the optimizer class name. In the case of multiple optimizers of the same type, they will be named Adam, Adam-1, etc. If an optimizer has multiple parameter groups, they will be named Adam/pg1, Adam/pg2, etc. To control naming, pass a name key in the construction of the learning rate scheduler configuration (first example below). A name key can also be used for parameter groups in the construction of the optimizer (second example below).
Example:
    def configure_optimizers(self):
        optimizer = torch.optim.Adam(...)
        lr_scheduler = {
            'scheduler': torch.optim.lr_scheduler.LambdaLR(optimizer, ...),
            'name': 'my_logging_name'
        }
        return [optimizer], [lr_scheduler]
Example:
    def configure_optimizers(self):
        optimizer = torch.optim.SGD(
            [{
                'params': [p for p in self.parameters()],
                'name': 'my_parameter_group_name'
            }],
            lr=0.1
        )
        lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, ...)
        return [optimizer], [lr_scheduler]
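As a further illustration of the naming scheme above, a hypothetical module with two optimizers of the same class (the module, layer sizes, and learning rates are made up for this sketch). Without explicit name keys, the monitor falls back to the optimizer class names and distinguishes the two series as Adam and Adam-1:

    import torch
    from lightning.pytorch import LightningModule


    class HypotheticalGAN(LightningModule):
        def __init__(self):
            super().__init__()
            # Lightning 2.x requires manual optimization with multiple optimizers
            self.automatic_optimization = False
            self.generator = torch.nn.Linear(32, 32)
            self.discriminator = torch.nn.Linear(32, 1)

        def configure_optimizers(self):
            opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
            opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
            sched_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=10)
            sched_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=10)
            # No 'name' keys given, so the logged series are named after the
            # optimizer class: Adam (generator) and Adam-1 (discriminator).
            return [opt_g, opt_d], [sched_g, sched_d]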
on_train_batch_start(trainer, *args, **kwargs)[source]¶
Called when the train batch begins.
Return type:
None
on_train_epoch_start(trainer, *args, **kwargs)[source]¶
Called when the train epoch begins.
Return type:
None
on_train_start(trainer, *args, **kwargs)[source]¶
Called before training; determines unique names for all lr schedulers in the case of multiple schedulers of the same type or multiple parameter groups.
Raises:
MisconfigurationException – If the Trainer has no logger.
Return type:
None
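Because of this check, the callback is only useful with a logger attached to the Trainer. A minimal sketch using the built-in CSVLogger (the save directory is a placeholder):

    from lightning.pytorch import Trainer
    from lightning.pytorch.callbacks import LearningRateMonitor
    from lightning.pytorch.loggers import CSVLogger

    trainer = Trainer(
        logger=CSVLogger(save_dir="logs/"),  # any Lightning logger works
        callbacks=[LearningRateMonitor(logging_interval="epoch")],
    )

    # Disabling logging would instead raise MisconfigurationException at
    # the start of training:
    # trainer = Trainer(logger=False, callbacks=[LearningRateMonitor()])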