SequentialLR

class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1)[source]#

Contains a list of schedulers expected to be called sequentially during the optimization process.

Specifically, the schedulers are called in order according to the milestone points, which mark the exact epochs at which control passes from one scheduler to the next.

Parameters

optimizer (Optimizer) – Wrapped optimizer.

schedulers (list) – List of chained schedulers to be called sequentially.

milestones (list) – List of integers reflecting the milestone points.

last_epoch (int) – The index of the last epoch. Default: -1.

Example

# Assuming optimizer uses lr = 0.05 for all groups
# lr = 0.005    if epoch == 0
# lr = 0.005    if epoch == 1
# lr = 0.005    if epoch == 2
# ...
# lr = 0.05     if epoch == 20
# lr = 0.045    if epoch == 21
# lr = 0.0405   if epoch == 22
scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=20)
scheduler2 = ExponentialLR(optimizer, gamma=0.9)
scheduler = SequentialLR(
    optimizer,
    schedulers=[scheduler1, scheduler2],
    milestones=[20],
)
for epoch in range(100):
    train(...)
    validate(...)
    scheduler.step()

[Figure: SequentialLR.png, the learning rate schedule produced by the example above]
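A common use of SequentialLR is to pair a short warmup scheduler with a longer decay scheduler. The sketch below is illustrative only: the toy model, the five-epoch warmup split, and the LinearLR/CosineAnnealingLR pairing are assumptions, not part of this class's API.

import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

model = torch.nn.Linear(10, 2)                  # toy model for illustration
optimizer = SGD(model.parameters(), lr=0.05)

# Warm up for 5 epochs, then decay with cosine annealing for the remaining 95.
warmup = LinearLR(optimizer, start_factor=0.1, total_iters=5)
decay = CosineAnnealingLR(optimizer, T_max=95)
scheduler = SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[5])

for epoch in range(100):
    optimizer.step()        # stands in for the real training loop
    scheduler.step()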

get_last_lr()[source]#

Return the last learning rate computed by the current scheduler.

Return type

list[float]
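get_last_lr() returns one value per parameter group, reflecting whichever wrapped scheduler is currently active. A minimal check against the Example above (the printed value assumes that setup):

scheduler.step()
print(scheduler.get_last_lr())  # [0.005] while ConstantLR is active (epochs 0-19)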

get_lr()[source]#

Compute the learning rate using the chainable form of the scheduler.

Return type

list[float]

load_state_dict(state_dict)[source]#

Load the scheduler’s state.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

recursive_undo(sched=None)[source]#

Recursively undo any step performed by the initialization of the wrapped schedulers.

state_dict()[source]#

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The wrapped scheduler states will also be saved.

Return type

dict[str, Any]
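A minimal checkpointing sketch, assuming the optimizer and scheduler from the Example above and an illustrative file name. Because state_dict() also captures the wrapped schedulers' states, load_state_dict() restores the full sequence:

import torch

# Save optimizer and scheduler state together (file name is illustrative).
torch.save(
    {"optimizer": optimizer.state_dict(), "scheduler": scheduler.state_dict()},
    "checkpoint.pt",
)

# To resume: rebuild the optimizer and SequentialLR with the same constructor
# arguments, then restore both states.
checkpoint = torch.load("checkpoint.pt")
optimizer.load_state_dict(checkpoint["optimizer"])
scheduler.load_state_dict(checkpoint["scheduler"])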

step()[source]#

Perform a step.