ASGD — PyTorch 2.0 documentation
class torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0, foreach=None, maximize=False, differentiable=False)[source]
Implements Averaged Stochastic Gradient Descent.
It has been proposed in Acceleration of stochastic approximation by averaging.
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- lr (float, optional) – learning rate (default: 1e-2)
- lambd (float, optional) – decay term (default: 1e-4)
- alpha (float, optional) – power for eta update (default: 0.75)
- t0 (float, optional) – point at which to start averaging (default: 1e6)
- weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
- foreach (bool, optional) – whether foreach implementation of optimizer is used. If unspecified by the user (so foreach is None), we will try to use foreach over the for-loop implementation on CUDA, since it is usually significantly more performant. (default: None)
- maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False)
- differentiable (bool, optional) – whether autograd should occur through the optimizer step in training. Otherwise, the step() function runs in a torch.no_grad() context. Setting to True can impair performance, so leave it False if you don’t intend to run autograd through this instance (default: False)
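A minimal usage sketch follows; the toy model, data, and hyperparameters are illustrative, not part of the API:

```python
import torch

# Minimal sketch: fit a toy linear model with ASGD.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss()

inputs = torch.randn(64, 10)   # illustrative random data
targets = torch.randn(64, 1)

for _ in range(100):
    optimizer.zero_grad()                        # clear old gradients
    loss = loss_fn(model(inputs), targets)
    loss.backward()                              # compute new gradients
    optimizer.step()                             # apply the ASGD update
```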
add_param_group(param_group)
Add a param group to the Optimizer's param_groups.
This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.
Parameters:
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
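A short sketch of that fine-tuning pattern; the modules and learning rates below are hypothetical:

```python
import torch

# Hypothetical two-stage fine-tuning: train only the head at first,
# then unfreeze the backbone and hand it to the optimizer mid-training.
backbone = torch.nn.Linear(10, 10)
head = torch.nn.Linear(10, 1)

optimizer = torch.optim.ASGD(head.parameters(), lr=1e-2)

# ... train the head for a while ...

# Add the backbone as a new param group with its own learning rate.
optimizer.add_param_group({"params": backbone.parameters(), "lr": 1e-3})
```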
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
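A save/restore round trip might look like the following sketch; the file path is illustrative:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-2)

# Save the optimizer state alongside the model (path is illustrative).
torch.save(optimizer.state_dict(), "asgd_state.pt")

# Later: rebuild an identically configured optimizer and restore its state.
restored = torch.optim.ASGD(model.parameters(), lr=1e-2)
restored.load_state_dict(torch.load("asgd_state.pt"))
```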
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The optimizer argument is the optimizer instance being used.
Parameters:
hook (Callable) – The user defined hook to be registered.
Returns:
a handle that can be used to remove the added hook by calling handle.remove()
Return type:
torch.utils.hooks.RemovableHandle
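A sketch of registering and later removing a post hook; the step counter is illustrative:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-2)

step_count = {"n": 0}

def count_steps(optimizer, args, kwargs):
    # Runs after every optimizer.step(); the return value is ignored.
    step_count["n"] += 1

handle = optimizer.register_step_post_hook(count_steps)

model(torch.randn(2, 10)).sum().backward()
optimizer.step()            # triggers count_steps -> step_count["n"] == 1

handle.remove()             # unregister the hook when no longer needed
```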
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values should be returned as a tuple containing the new_args and new_kwargs.
Parameters:
hook (Callable) – The user defined hook to be registered.
Returns:
a handle that can be used to remove the added hook by calling handle.remove()
Return type:
torch.utils.hooks.RemovableHandle
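A sketch of a pre hook that only observes the step; returning None leaves args and kwargs unchanged:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-2)

def log_before_step(optimizer, args, kwargs):
    # Runs before every optimizer.step(). Return None to leave args/kwargs
    # unchanged, or return (new_args, new_kwargs) to replace them.
    print(f"stepping with lr={optimizer.param_groups[0]['lr']}")

handle = optimizer.register_step_pre_hook(log_before_step)

model(torch.randn(2, 10)).sum().backward()
optimizer.step()   # the hook prints before the step runs
handle.remove()
```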
state_dict()
Returns the state of the optimizer as a dict.
It contains two entries:
- state - a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups - a list containing all parameter groups where each parameter group is a dict
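For instance, inspecting the returned dict; the exact contents of the state entry vary by optimizer and PyTorch version:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-2)

sd = optimizer.state_dict()
print(sorted(sd.keys()))              # ['param_groups', 'state']
print(sd["param_groups"][0]["lr"])    # 0.01
```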
zero_grad(set_to_none=True)
Sets the gradients of all optimized torch.Tensor objects to zero.
Parameters:
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grad attributes are guaranteed to be None for params that did not receive a gradient.
3. torch.optim optimizers behave differently if the gradient is 0 or None (in one case the step is taken with a gradient of 0, and in the other the step is skipped altogether).
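A small sketch of the set_to_none behavior described above:

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.ASGD(model.parameters(), lr=1e-2)

model(torch.randn(2, 4)).sum().backward()   # materialize .grad tensors

optimizer.zero_grad(set_to_none=False)
print(model.weight.grad)    # a Tensor of zeros, same shape as weight

optimizer.zero_grad(set_to_none=True)
print(model.weight.grad)    # None
```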