NAdam — PyTorch 2.0 documentation

class torch.optim.NAdam(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, momentum_decay=0.004, *, foreach=None, differentiable=False)[source]

Implements the NAdam algorithm.

\begin{aligned}
&\rule{110mm}{0.4pt} \\
&\textbf{input} : \gamma_t \text{ (lr)}, \: \beta_1, \beta_2 \text{ (betas)}, \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)} \\
&\hspace{13mm} \lambda \text{ (weight decay)}, \: \psi \text{ (momentum decay)} \\
&\textbf{initialize} : m_0 \leftarrow 0 \text{ (first moment)}, \: v_0 \leftarrow 0 \text{ (second moment)} \\[-1.ex]
&\rule{110mm}{0.4pt} \\
&\textbf{for} \: t = 1 \: \textbf{to} \: \ldots \: \textbf{do} \\
&\hspace{5mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
&\hspace{5mm} \textbf{if} \: \lambda \neq 0 \\
&\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
&\hspace{5mm} \mu_t \leftarrow \beta_1 \big(1 - \tfrac{1}{2}\, 0.96^{t \psi}\big) \\
&\hspace{5mm} \mu_{t+1} \leftarrow \beta_1 \big(1 - \tfrac{1}{2}\, 0.96^{(t+1)\psi}\big) \\
&\hspace{5mm} m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
&\hspace{5mm} v_t \leftarrow \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
&\hspace{5mm} \widehat{m_t} \leftarrow \mu_{t+1} m_t / \big(1 - \textstyle\prod_{i=1}^{t+1} \mu_i\big) + (1 - \mu_t) g_t / \big(1 - \textstyle\prod_{i=1}^{t} \mu_i\big) \\
&\hspace{5mm} \widehat{v_t} \leftarrow v_t / \big(1 - \beta_2^t\big) \\
&\hspace{5mm} \theta_t \leftarrow \theta_{t-1} - \gamma \, \widehat{m_t} / \big(\sqrt{\widehat{v_t}} + \epsilon\big) \\
&\rule{110mm}{0.4pt} \\[-1.ex]
&\textbf{return} \: \theta_t \\[-1.ex]
&\rule{110mm}{0.4pt} \\[-1.ex]
\end{aligned}

For further details regarding the algorithm we refer to Incorporating Nesterov Momentum into Adam.
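The pseudocode maps onto a straightforward per-parameter update. Below is a minimal Python sketch of a single NAdam step for one parameter tensor, written only to mirror the math above; it is not the torch.optim implementation, and the helper name nadam_step and the mu_product bookkeeping are illustrative.

```python
import torch

def nadam_step(param, grad, m, v, t, lr=2e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.0, momentum_decay=4e-3, mu_product=1.0):
    """One NAdam update for a single tensor, mirroring the pseudocode above.

    mu_product carries prod_{i=1..t-1} mu_i between calls; the caller keeps
    m, v, t, and mu_product as per-parameter state.
    """
    if weight_decay != 0:
        grad = grad + weight_decay * param

    # Momentum schedule mu_t and mu_{t+1}.
    mu_t = beta1 * (1 - 0.5 * 0.96 ** (t * momentum_decay))
    mu_next = beta1 * (1 - 0.5 * 0.96 ** ((t + 1) * momentum_decay))
    mu_product = mu_product * mu_t  # prod_{i=1..t} mu_i

    # Biased first and second moment estimates.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad

    # Bias-corrected Nesterov-style first moment and second moment.
    m_hat = (mu_next * m) / (1 - mu_product * mu_next) + ((1 - mu_t) * grad) / (1 - mu_product)
    v_hat = v / (1 - beta2 ** t)

    param = param - lr * m_hat / (v_hat.sqrt() + eps)
    return param, m, v, mu_product

# Illustrative single step.
p, g = torch.zeros(3), torch.ones(3)
p, m, v, mu_prod = nadam_step(p, g, torch.zeros(3), torch.zeros(3), t=1)
```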

Parameters:

params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

lr (float, optional) – learning rate (default: 2e-3)

betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999))

eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)

weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)

momentum_decay (float, optional) – momentum decay, ψ in the algorithm above (default: 4e-3)

foreach (bool, optional) – whether the foreach implementation of the optimizer is used (default: None)

differentiable (bool, optional) – whether autograd should occur through the optimizer step in training; otherwise, step() runs in a torch.no_grad() context (default: False)
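A minimal usage sketch with the default hyperparameters; the model, data, and loss below are placeholders rather than part of the documented API:

```python
import torch

model = torch.nn.Linear(10, 1)                     # placeholder model
optimizer = torch.optim.NAdam(model.parameters())  # lr=2e-3, momentum_decay=4e-3 by default

inputs, targets = torch.randn(32, 10), torch.randn(32, 1)
optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()
```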

add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.

Parameters:

param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
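For example, a second parameter group with its own learning rate might be added once a frozen layer is unfrozen; the model and layer names here are illustrative only:

```python
import torch

backbone = torch.nn.Linear(10, 10)
head = torch.nn.Linear(10, 1)
optimizer = torch.optim.NAdam(head.parameters(), lr=2e-3)

# Later in training: unfreeze the backbone and optimize it with a smaller lr.
for p in backbone.parameters():
    p.requires_grad_(True)
optimizer.add_param_group({"params": backbone.parameters(), "lr": 2e-4})
```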

load_state_dict(state_dict)

Loads the optimizer state.

Parameters:

state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
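A common pattern is to checkpoint and restore the optimizer together with the model; the file name below is a placeholder:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.NAdam(model.parameters())

# Save model and optimizer state together.
torch.save({"model": model.state_dict(), "optim": optimizer.state_dict()}, "checkpoint.pt")

# Restore both before resuming training.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optim"])
```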

register_step_post_hook(hook)

Register an optimizer step post hook which will be called after the optimizer step. It should have the following signature:

hook(optimizer, args, kwargs) -> None

The optimizer argument is the optimizer instance being used.

Parameters:

hook (Callable) – The user-defined hook to be registered.

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle
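A sketch of a post hook that simply counts optimizer steps; the step_count attribute name is made up for the example:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.NAdam(model.parameters())

def count_steps(optimizer, args, kwargs):
    # Runs after every optimizer.step(); stores a simple step counter on the optimizer.
    optimizer.step_count = getattr(optimizer, "step_count", 0) + 1

handle = optimizer.register_step_post_hook(count_steps)
# ... training loop ...
handle.remove()  # detach the hook when it is no longer needed
```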

register_step_pre_hook(hook)

Register an optimizer step pre hook which will be called before the optimizer step. It should have the following signature:

hook(optimizer, args, kwargs) -> None or modified args and kwargs

The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.

Parameters:

hook (Callable) – The user-defined hook to be registered.

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle
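A sketch of a pre hook that logs before each step; returning None leaves args and kwargs unchanged, and the printed message is illustrative:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.NAdam(model.parameters())

def log_before_step(optimizer, args, kwargs):
    # Runs before every optimizer.step(); returning None keeps args/kwargs as-is.
    print(f"stepping with lr={optimizer.param_groups[0]['lr']}")

handle = optimizer.register_step_pre_hook(log_before_step)
```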

state_dict()

Returns the state of the optimizer as a dict.

It contains two entries:

state – a dict holding current optimization state. Its content differs between optimizer classes.

param_groups – a list containing all parameter groups, where each parameter group is a dict.
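For example, inspecting the returned dict shows exactly these two keys:

```python
import torch

optimizer = torch.optim.NAdam(torch.nn.Linear(10, 1).parameters())
sd = optimizer.state_dict()
print(sorted(sd.keys()))            # ['param_groups', 'state']
print(sd["param_groups"][0]["lr"])  # 0.002
```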

zero_grad(set_to_none=True)

Sets the gradients of all optimized torch.Tensor objects to zero.

Parameters:

set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:

1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.

2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.

3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0, and in the other it skips the step altogether).
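A small sketch of the difference described above, purely illustrative:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.NAdam(model.parameters())

loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.zero_grad(set_to_none=True)
print(model.weight.grad)   # None

loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.zero_grad(set_to_none=False)
print(model.weight.grad)   # tensor of zeros, same shape as the weight
```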