LambdaLR
class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)[source]
Sets the initial learning rate.
The learning rate of each parameter group is set to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.
Parameters:
- optimizer (Optimizer) – Wrapped optimizer.
- lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.
- last_epoch (int) – The index of last epoch. Default: -1.
Example
Assuming optimizer has two groups.
```python
num_epochs = 100
lambda1 = lambda epoch: epoch // 30
lambda2 = lambda epoch: 0.95**epoch
scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
for epoch in range(num_epochs):
    train(...)
    validate(...)
    scheduler.step()
```
Alternatively, you can use a single lambda function for all groups.
```python
scheduler = LambdaLR(opt, lr_lambda=lambda epoch: epoch // 30)
for epoch in range(num_epochs):
    train(...)
    validate(...)
    scheduler.step()
```
get_last_lr()[source]
Get the most recent learning rates computed by this scheduler.
Returns:
A list of learning rates with entries for each of the optimizer’s param_groups, with the same types as their group["lr"]s.
Return type:
list[float]
Note
The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.
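For illustration, a minimal self-contained sketch of logging the rates this scheduler last computed; the dummy parameter, the SGD optimizer, and the 0.95**epoch schedule are assumptions for the example, not part of this page.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

# Assumed setup: a dummy parameter so the optimizer has one param group.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95**epoch)

for epoch in range(3):
    optimizer.step()                # parameter update first (no gradients here)
    scheduler.step()                # then advance the schedule
    print(scheduler.get_last_lr())  # one entry per param group: ~0.095, ~0.09025, ...
```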
get_lr()[source]
Compute the next learning rate for each of the optimizer’s param_groups.
Scales the base_lrs by the outputs of the lr_lambdas at last_epoch.
Returns:
A list of learning rates for each of the optimizer’s param_groups with the same types as their current group["lr"]s.
Return type:
list[float]
Note
If you’re trying to inspect the most recent learning rate, use get_last_lr() instead.
Note
The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.
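To make the scaling concrete, here is a pencil-and-paper sketch in plain Python (not the library implementation); the base rates reuse the lambdas from the example above and the numeric values are assumptions.

```python
# Each group's next rate is its base_lr scaled by its lambda's output
# at last_epoch, mirroring "Scales the base_lrs by the outputs of the
# lr_lambdas at last_epoch" above.
base_lrs = [0.05, 0.01]  # assumed initial lrs of two param groups
lr_lambdas = [lambda e: e // 30, lambda e: 0.95**e]
last_epoch = 31
lrs = [base * fn(last_epoch) for base, fn in zip(base_lrs, lr_lambdas)]
print(lrs)  # [0.05 * 1, 0.01 * 0.95**31] -> [0.05, ~0.0020]
```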
load_state_dict(state_dict)[source]
Load the scheduler’s state.
When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
Parameters:
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
state_dict()[source]
Return the state of the scheduler as a dict.
It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.
When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
Return type:
dict[str, Any]
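As a hedged sketch of the save/restore round trip, checkpointing the optimizer and scheduler together as advised above; the "checkpoint.pt" filename and the surrounding optimizer/scheduler objects are assumptions.

```python
import torch

# Save both states together, as recommended above.
torch.save(
    {
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),
    },
    "checkpoint.pt",  # hypothetical path
)

# Later, after recreating the optimizer and an identically configured
# scheduler (plain functions and lambdas are not saved in the state dict,
# so pass them again at construction):
state = torch.load("checkpoint.pt")
optimizer.load_state_dict(state["optimizer"])
scheduler.load_state_dict(state["scheduler"])
```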
step(epoch=None)[source]
Step the scheduler.
Parameters:
epoch (int, optional) –
Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.
Note
Call this method after calling the optimizer’s step().
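A minimal sketch of that ordering inside a training loop; dataloader, compute_loss, and num_epochs are hypothetical placeholders.

```python
for epoch in range(num_epochs):
    for batch in dataloader:        # hypothetical iterable of batches
        optimizer.zero_grad()
        loss = compute_loss(batch)  # hypothetical loss computation
        loss.backward()
        optimizer.step()            # update parameters first...
    scheduler.step()                # ...then step the schedule once per epoch
```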