CyclicLR — PyTorch 2.7 documentation

class torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1)[source]

Sets the learning rate of each parameter group according to the cyclical learning rate (CLR) policy.

The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.

The cyclical learning rate policy changes the learning rate after every batch; step should be called after each batch has been used for training.

This class has three built-in policies, as put forth in the paper:

* "triangular": A basic triangular cycle without amplitude scaling.
* "triangular2": A basic triangular cycle that scales the initial amplitude by half each cycle.
* "exp_range": A cycle that scales the initial amplitude by gamma**(cycle iterations) at each cycle iteration.

This implementation was adapted from the GitHub repo bckenstler/CLR.
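As a rough illustration of how the three built-in modes differ, the sketch below steps a CyclicLR scheduler through a few cycles under each mode and records the learning rate with get_last_lr(). The single dummy parameter and the small boundaries and step sizes are arbitrary choices for this illustration, not recommendations.

import torch

# A single dummy parameter stands in for a real model's parameters.
param = torch.nn.Parameter(torch.zeros(1))

for mode in ("triangular", "triangular2", "exp_range"):
    optimizer = torch.optim.SGD([param], lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CyclicLR(
        optimizer, base_lr=0.001, max_lr=0.1,
        step_size_up=5, mode=mode, gamma=0.99,
    )
    lrs = []
    for _ in range(30):          # three full cycles of 10 iterations each
        optimizer.step()         # normally preceded by a forward/backward pass
        scheduler.step()         # step once per batch
        lrs.append(scheduler.get_last_lr()[0])
    print(mode, [round(lr, 4) for lr in lrs])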

Parameters

* optimizer (Optimizer) – Wrapped optimizer.
* base_lr (float or list) – Initial learning rate, which is the lower boundary in the cycle for each parameter group.
* max_lr (float or list) – Upper learning rate boundary in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_lr - base_lr); depending on the scaling function, max_lr may not actually be reached.
* step_size_up (int) – Number of training iterations in the increasing half of a cycle. Default: 2000
* step_size_down (int) – Number of training iterations in the decreasing half of a cycle. If None, it is set to step_size_up. Default: None
* mode (str) – One of {'triangular', 'triangular2', 'exp_range'}, corresponding to the policies detailed above. Ignored if scale_fn is not None. Default: 'triangular'
* gamma (float) – Constant in the 'exp_range' scaling function: gamma**(cycle iterations). Default: 1.0
* scale_fn (function) – Custom scaling policy defined by a single-argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. If specified, 'mode' is ignored. Default: None
* scale_mode (str) – One of {'cycle', 'iterations'}. Defines whether scale_fn is evaluated on the cycle number or on cycle iterations (training iterations since the start of the cycle). Default: 'cycle'
* cycle_momentum (bool) – If True, momentum is cycled inversely to the learning rate between 'base_momentum' and 'max_momentum'. Default: True
* base_momentum (float or list) – Lower momentum boundary in the cycle for each parameter group. Note that momentum is cycled inversely to the learning rate; at the peak of a cycle, momentum is 'base_momentum' and the learning rate is 'max_lr'. Default: 0.8
* max_momentum (float or list) – Upper momentum boundary in the cycle for each parameter group. Functionally, it defines the momentum cycle amplitude (max_momentum - base_momentum); at the start of a cycle, momentum is 'max_momentum' and the learning rate is 'base_lr'. Default: 0.9
* last_epoch (int) – The index of the last batch, used when resuming a training job. Since step() should be invoked after each batch rather than after each epoch, this number counts batches, not epochs. When last_epoch=-1, the schedule starts from the beginning. Default: -1

Example

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)
data_loader = torch.utils.data.DataLoader(...)
for epoch in range(10):
    for batch in data_loader:
        train_batch(...)
        scheduler.step()
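In practice the step sizes are often tied to the length of the data loader so that each half cycle spans a whole number of epochs. The sketch below shows that pattern together with momentum cycling; the model, dataset, and the choice of four epochs per half cycle are hypothetical placeholders, not part of the API.

import torch

# Hypothetical placeholders for a real model and dataset.
model = torch.nn.Linear(10, 2)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
data_loader = torch.utils.data.DataLoader(dataset, batch_size=8)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# One full cycle every 8 epochs: 4 epochs up, 4 epochs down (an arbitrary choice).
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=0.001,
    max_lr=0.1,
    step_size_up=4 * len(data_loader),
    cycle_momentum=True,     # momentum is cycled inversely to the learning rate
    base_momentum=0.8,
    max_momentum=0.9,
)

loss_fn = torch.nn.CrossEntropyLoss()
for epoch in range(16):
    for inputs, targets in data_loader:
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()
        optimizer.step()
        scheduler.step()     # once per batch, as in the example above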

get_last_lr()[source]

Return the last learning rate computed by the current scheduler.

Return type

list[float]
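get_last_lr() returns one entry per parameter group, which makes it convenient for logging the schedule during training. A minimal sketch, assuming a single dummy parameter group:

import torch

param = torch.nn.Parameter(torch.zeros(1))  # hypothetical stand-in for model parameters
optimizer = torch.optim.SGD([param], lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1, step_size_up=100)

optimizer.step()
scheduler.step()
print(scheduler.get_last_lr())  # one float per parameter group, here approximately [0.0109]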

get_lr()[source]

Calculate the learning rate at the current batch index.

This function treats self.last_epoch as the last batch index.

If self.cycle_momentum is True, this function has a side effect of updating the optimizer’s momentum.
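The momentum side effect can be observed directly on the wrapped optimizer's param_groups. A small sketch, assuming a single SGD parameter group and cycle_momentum=True:

import torch

param = torch.nn.Parameter(torch.zeros(1))  # hypothetical stand-in
optimizer = torch.optim.SGD([param], lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=0.01, max_lr=0.1,
    step_size_up=100, cycle_momentum=True,
    base_momentum=0.8, max_momentum=0.9,
)

for _ in range(50):              # halfway up the first cycle
    optimizer.step()
    scheduler.step()

group = optimizer.param_groups[0]
# lr has risen halfway toward max_lr; momentum has fallen halfway toward base_momentum
print(group["lr"], group["momentum"])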

load_state_dict(state_dict)[source]

Load the scheduler’s state.
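For checkpointing, the scheduler state is typically saved and restored alongside the optimizer state. A minimal sketch with a hypothetical checkpoint path:

import torch

param = torch.nn.Parameter(torch.zeros(1))  # hypothetical stand-in
optimizer = torch.optim.SGD([param], lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)

# ... train for a while, calling scheduler.step() after each batch ...

checkpoint = {
    "optimizer": optimizer.state_dict(),
    "scheduler": scheduler.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")  # hypothetical path

# Later, after rebuilding the optimizer and scheduler the same way:
checkpoint = torch.load("checkpoint.pt")
optimizer.load_state_dict(checkpoint["optimizer"])
scheduler.load_state_dict(checkpoint["scheduler"])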

scale_fn(x)[source]

Evaluate the scaling policy at x.

Return type

float
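When a custom scale_fn is passed to the constructor, this method evaluates it (and mode is ignored). A small sketch of a per-cycle amplitude decay; the 0.9 factor and the dummy parameter are illustrative assumptions:

import torch

param = torch.nn.Parameter(torch.zeros(1))  # hypothetical stand-in
optimizer = torch.optim.SGD([param], lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=0.01,
    max_lr=0.1,
    step_size_up=100,
    scale_fn=lambda x: 0.9 ** x,  # shrink the cycle amplitude by 10% per cycle
    scale_mode="cycle",           # the argument x is the cycle number
)

# Evaluate the custom policy at cycle numbers 1 and 2.
print(scheduler.scale_fn(1), scheduler.scale_fn(2))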

step(epoch=None)[source]

Perform a step.