ReduceLROnPlateau — PyTorch 2.7 documentation

class torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]

Reduce learning rate when a metric has stopped improving.

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler monitors a metric quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.

Parameters

    optimizer (Optimizer) – Wrapped optimizer.
    mode (str) – One of min, max. In min mode, the learning rate is reduced when the monitored quantity has stopped decreasing; in max mode, when it has stopped increasing. Default: 'min'.
    factor (float) – Factor by which the learning rate will be reduced: new_lr = lr * factor. Default: 0.1.
    patience (int) – Number of allowed epochs with no improvement after which the learning rate will be reduced. Default: 10.
    threshold (float) – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
    threshold_mode (str) – One of rel, abs. In rel mode, dynamic_threshold = best * (1 + threshold) in 'max' mode or best * (1 - threshold) in 'min' mode. In abs mode, dynamic_threshold = best + threshold in 'max' mode or best - threshold in 'min' mode. Default: 'rel'.
    cooldown (int) – Number of epochs to wait before resuming normal operation after the learning rate has been reduced. Default: 0.
    min_lr (float or list) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or of each group respectively. Default: 0.
    eps (float) – Minimal decay applied to the learning rate. If the difference between the new and old learning rate is smaller than eps, the update is ignored. Default: 1e-8.
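As a worked illustration of how threshold and threshold_mode interact (a minimal standalone sketch of the comparison described above, not the scheduler's internal code):

def is_improvement(current, best, mode='min', threshold=1e-4, threshold_mode='rel'):
    # Mirrors the dynamic_threshold formulas above (illustrative sketch only).
    if mode == 'min':
        if threshold_mode == 'rel':
            return current < best * (1.0 - threshold)
        return current < best - threshold  # 'abs'
    if threshold_mode == 'rel':
        return current > best * (1.0 + threshold)
    return current > best + threshold  # 'abs'

print(is_improvement(0.9999, 1.0))  # False: within the 1e-4 relative band, not significant
print(is_improvement(0.99, 1.0))    # True: a significant decrease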

Example

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = ReduceLROnPlateau(optimizer, 'min')
for epoch in range(10):
    train(...)
    val_loss = validate(...)
    # Note that step should be called after validate()
    scheduler.step(val_loss)
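For a metric that should increase, such as validation accuracy, the same loop applies with mode='max' (a sketch reusing the placeholder train(...) and validate(...) helpers from the example above):

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = ReduceLROnPlateau(optimizer, mode='max', factor=0.5, patience=5)
for epoch in range(10):
    train(...)
    val_accuracy = validate(...)
    # In 'max' mode, the lr is reduced when accuracy stops increasing.
    scheduler.step(val_accuracy)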

get_last_lr()[source]

Return the last learning rate computed by the current scheduler.

Return type

list[float]
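For instance, get_last_lr() makes it easy to observe when a reduction has actually happened. A self-contained sketch that feeds a flat, non-improving loss so the rate eventually drops:

import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=2)

# A constant loss never improves on the best value, so after `patience`
# bad epochs the scheduler multiplies the lr by `factor`.
for epoch in range(6):
    scheduler.step(1.0)
    print(epoch, scheduler.get_last_lr())  # e.g. [0.1] for early epochs, then [0.01]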

get_lr()[source]

Compute the learning rate using the chainable form of the scheduler.

Return type

list[float]
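Because this scheduler is driven by a metric rather than the epoch count, get_lr() is typically used internally; user code usually reads the current rate via get_last_lr() or directly from the optimizer, e.g.:

# Read the current learning rate of each param group from the optimizer.
current_lrs = [group['lr'] for group in optimizer.param_groups]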

load_state_dict(state_dict)[source]

Load the scheduler's state.

Parameters

    state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
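load_state_dict() pairs with state_dict() for checkpointing, so that the scheduler's patience and cooldown counters survive a restart. A minimal save/restore sketch (the file name checkpoint.pt is illustrative):

import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = ReduceLROnPlateau(optimizer, mode='min')

# Save model, optimizer, and scheduler state together.
torch.save({
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),
}, 'checkpoint.pt')

# ... later, restore all three before resuming training.
checkpoint = torch.load('checkpoint.pt')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])
scheduler.load_state_dict(checkpoint['scheduler'])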

step(metrics, epoch=None)[source]

Perform a step.

Pass the current value of the monitored quantity as metrics; the scheduler compares it against the best value seen so far and reduces each param group's learning rate by factor once patience epochs pass without significant improvement. If epoch is omitted, an internal epoch counter is incremented instead.