Adadelta — PyTorch 2.0 documentation
class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0, foreach=None, *, maximize=False, differentiable=False)[source]¶
Implements Adadelta algorithm.
\begin{aligned}
&\textbf{input}: \gamma \text{ (lr)},\ \theta_0 \text{ (params)},\ f(\theta) \text{ (objective)},\ \rho \text{ (decay)},\ \lambda \text{ (weight decay)} \\
&\textbf{initialize}: v_0 \leftarrow 0 \text{ (square avg)},\ u_0 \leftarrow 0 \text{ (accumulate variables)} \\
&\textbf{for}\ t = 1\ \textbf{to}\ \ldots\ \textbf{do} \\
&\quad g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
&\quad \textbf{if}\ \lambda \neq 0 \\
&\quad\quad g_t \leftarrow g_t + \lambda \theta_{t-1} \\
&\quad v_t \leftarrow v_{t-1} \rho + g_t^2 (1 - \rho) \\
&\quad \Delta x_t \leftarrow \frac{\sqrt{u_{t-1} + \epsilon}}{\sqrt{v_t + \epsilon}}\, g_t \\
&\quad u_t \leftarrow u_{t-1} \rho + \Delta x_t^2 (1 - \rho) \\
&\quad \theta_t \leftarrow \theta_{t-1} - \gamma \Delta x_t \\
&\textbf{return}\ \theta_t
\end{aligned}
For further details regarding the algorithm we refer to ADADELTA: An Adaptive Learning Rate Method.
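A minimal, per-tensor sketch of the update rule above (single parameter, no weight decay, not the library implementation); variable names mirror the pseudocode, and the objective is an arbitrary illustrative choice:

```python
import torch

lr, rho, eps = 1.0, 0.9, 1e-6
theta = torch.randn(5, requires_grad=True)
v = torch.zeros_like(theta)   # v_0, running average of squared gradients
u = torch.zeros_like(theta)   # u_0, running average of squared deltas

for t in range(10):
    loss = (theta ** 2).sum()                   # f_t(theta), an arbitrary objective
    grad, = torch.autograd.grad(loss, theta)    # g_t
    v = v * rho + grad.pow(2) * (1 - rho)
    delta = (u + eps).sqrt() / (v + eps).sqrt() * grad
    u = u * rho + delta.pow(2) * (1 - rho)
    with torch.no_grad():
        theta -= lr * delta                     # theta_t <- theta_{t-1} - gamma * delta_x_t
```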
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- rho (float, optional) – coefficient used for computing a running average of squared gradients (default: 0.9)
- eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
- lr (float, optional) – coefficient that scales delta before it is applied to the parameters (default: 1.0)
- weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
- foreach (bool, optional) – whether the foreach implementation of the optimizer is used. If unspecified by the user (so foreach is None), we will try to use the foreach implementation over the for-loop implementation on CUDA, since it is usually significantly more performant. (default: None)
- maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False)
- differentiable (bool, optional) – whether autograd should occur through the optimizer step in training. Otherwise, the step() function runs in a torch.no_grad() context. Setting to True can impair performance, so leave it False if you don’t intend to run autograd through this instance (default: False)
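A minimal usage sketch; the model, data, and training loop below are illustrative assumptions, not part of the API:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.Adadelta(model.parameters(), lr=1.0, rho=0.9, eps=1e-6)

inputs = torch.randn(32, 10)
targets = torch.randn(32, 1)

for _ in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
```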
add_param_group(param_group)¶
Add a param group to the Optimizer's param_groups.
This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses (see the sketch below).
Parameters:
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
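A sketch of unfreezing a previously frozen module and registering it with an existing optimizer via add_param_group(); the model, layer names, and learning rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

backbone = nn.Linear(10, 10)
head = nn.Linear(10, 2)
for p in backbone.parameters():
    p.requires_grad = False  # start with the backbone frozen

optimizer = torch.optim.Adadelta(head.parameters(), lr=1.0)

# Later in training: unfreeze the backbone and register it with its own options.
for p in backbone.parameters():
    p.requires_grad = True
optimizer.add_param_group({"params": backbone.parameters(), "lr": 0.1})
```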
load_state_dict(state_dict)¶
Loads the optimizer state.
Parameters:
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
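A checkpointing sketch using state_dict() and load_state_dict(); the model and file name are illustrative assumptions:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.Adadelta(model.parameters())

# ... train for a while, then checkpoint the optimizer state ...
torch.save(optimizer.state_dict(), "adadelta_opt.pt")

# Restore into a freshly constructed optimizer over the same parameters.
new_optimizer = torch.optim.Adadelta(model.parameters())
new_optimizer.load_state_dict(torch.load("adadelta_opt.pt"))
```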
register_step_post_hook(hook)¶
Register an optimizer step post hook which will be called after the optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The optimizer argument is the optimizer instance being used.
Parameters:
hook (Callable) – The user-defined hook to be registered.
Returns:
a handle that can be used to remove the added hook by calling handle.remove()
Return type:
torch.utils.hooks.RemovableHandle
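A sketch of a post hook that counts optimizer steps; the counter and hook function are illustrative assumptions, while the hook signature follows the docs:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.Adadelta(model.parameters())

step_count = 0

def count_steps(optimizer, args, kwargs) -> None:
    global step_count
    step_count += 1

handle = optimizer.register_step_post_hook(count_steps)

loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()   # count_steps runs after this step

handle.remove()    # detach the hook when it is no longer needed
```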
register_step_pre_hook(hook)¶
Register an optimizer step pre hook which will be called before the optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.
Parameters:
hook (Callable) – The user-defined hook to be registered.
Returns:
a handle that can be used to remove the added hook by calling handle.remove()
Return type:
torch.utils.hooks.RemovableHandle
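A sketch of a pre hook that logs just before each step; returning None leaves args and kwargs unchanged, while a hook may instead return a (new_args, new_kwargs) tuple. The model and function name are illustrative assumptions:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.Adadelta(model.parameters())

def log_before_step(optimizer, args, kwargs):
    print(f"about to step over {len(optimizer.param_groups)} param group(s)")
    return None  # keep the original args and kwargs

handle = optimizer.register_step_pre_hook(log_before_step)

loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()   # log_before_step runs before the update

handle.remove()
```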
state_dict()¶
Returns the state of the optimizer as a dict.
It contains two entries:
- state - a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups - a list containing all parameter groups, where each parameter group is a dict
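A sketch inspecting those two entries; the model is an illustrative assumption:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.Adadelta(model.parameters(), lr=1.0)

loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

sd = optimizer.state_dict()
print(sd.keys())                 # dict_keys(['state', 'param_groups'])
print(sd["param_groups"][0])     # per-group options: lr, rho, eps, weight_decay, ...
print(list(sd["state"].keys()))  # per-parameter state, keyed by parameter index
```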
zero_grad(set_to_none=True)¶
Sets the gradients of all optimized torch.Tensor objects to zero.
Parameters:
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grad attributes are guaranteed to be None for params that did not receive a gradient.
3. torch.optim optimizers behave differently if the gradient is 0 or None (in one case the step is taken with a gradient of 0, in the other the step is skipped altogether).
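A sketch observing the difference between set_to_none=True and set_to_none=False; the tiny model is an illustrative assumption:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.Adadelta(model.parameters())

loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.zero_grad(set_to_none=True)
print(model.weight.grad)   # None

loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.zero_grad(set_to_none=False)
print(model.weight.grad)   # a tensor of zeros with the same shape as the weight
```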