LBFGS — PyTorch 2.7 documentation
class torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None)[source]¶
Implements the L-BFGS algorithm.
Heavily inspired by minFunc.
Warning
This optimizer doesn’t support per-parameter options and parameter groups (there can be only one).
Warning
Right now all parameters have to be on a single device. This will be improved in the future.
Note
This is a very memory intensive optimizer (it requires an additional param_bytes * (history_size + 1) bytes). If it doesn't fit in memory, try reducing the history size or use a different algorithm.
Parameters
- params (iterable) – iterable of parameters to optimize. Parameters must be real.
- lr (float, optional) – learning rate (default: 1)
- max_iter (int, optional) – maximal number of iterations per optimization step (default: 20)
- max_eval (int, optional) – maximal number of function evaluations per optimization step (default: max_iter * 1.25).
- tolerance_grad (float, optional) – termination tolerance on first order optimality (default: 1e-7).
- tolerance_change (float, optional) – termination tolerance on function value/parameter changes (default: 1e-9).
- history_size (int, optional) – update history size (default: 100).
- line_search_fn (str, optional) – either ‘strong_wolfe’ or None (default: None).
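A minimal usage sketch (the model, data, and loss below are illustrative placeholders, not part of the API): because L-BFGS re-evaluates the objective during each step, step() must be given a closure that clears the gradients, recomputes the loss, and calls backward().

import torch

# Placeholder model and synthetic data, used only for illustration.
model = torch.nn.Linear(10, 1)
inputs = torch.randn(64, 10)
targets = torch.randn(64, 1)

optimizer = torch.optim.LBFGS(
    model.parameters(),
    lr=1,
    max_iter=20,
    history_size=100,
    line_search_fn="strong_wolfe",
)

def closure():
    # Re-evaluates the model and returns the loss; L-BFGS may call this
    # several times per optimization step.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    return loss

for _ in range(10):
    optimizer.step(closure)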
add_param_group(param_group)[source]¶
Add a param group to the Optimizer's param_groups.
This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.
Parameters
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
load_state_dict(state_dict)[source]¶
Load the optimizer state.
Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
Note
The names of the parameters (if they exist under the "param_names" key of each param group in state_dict()) will not affect the loading process. To use the parameters' names for custom cases (such as when the parameters in the loaded state dict differ from those initialized in the optimizer), a custom register_load_state_dict_pre_hook should be implemented to adapt the loaded dict accordingly. If param_names exist in the loaded state dict's param_groups, they will be saved and will override the current names, if present, in the optimizer state. If they do not exist in the loaded state dict, the optimizer param_names will remain unchanged.
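As a sketch of how this is typically used together with state_dict() for checkpointing (the file name and the model/optimizer objects are assumed from the earlier usage sketch):

torch.save({"model": model.state_dict(), "optim": optimizer.state_dict()}, "checkpoint.pt")

checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optim"])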
register_load_state_dict_post_hook(hook, prepend=False)[source]¶
Register a load_state_dict post-hook which will be called after load_state_dict() is called. It should have the following signature:
hook(optimizer) -> None
The optimizer argument is the optimizer instance being used.
The hook will be called with argument self after calling load_state_dict on self. The registered hook can be used to perform post-processing after load_state_dict has loaded the state_dict.
Parameters
- hook (Callable) – The user defined hook to be registered.
- prepend (bool) – If True, the provided post hook will be fired before all the already registered post-hooks on load_state_dict. Otherwise, the provided hook will be fired after all the already registered post-hooks. (default: False)
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
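A brief sketch of such a post-hook (the hook body is illustrative; only the registration call is part of the API):

def log_loaded_state(optimizer):
    # Post-hook: runs after load_state_dict() has populated optimizer.state.
    print(f"restored state for {len(optimizer.state)} parameters")

handle = optimizer.register_load_state_dict_post_hook(log_loaded_state)
# The hook can later be removed via the returned handle:
handle.remove()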
register_load_state_dict_pre_hook(hook, prepend=False)[source]¶
Register a load_state_dict pre-hook which will be called before load_state_dict() is called. It should have the following signature:
hook(optimizer, state_dict) -> state_dict or None
The optimizer argument is the optimizer instance being used and the state_dict argument is a shallow copy of the state_dict the user passed in to load_state_dict. The hook may modify the state_dict in place or optionally return a new one. If a state_dict is returned, it will be the one loaded into the optimizer.
The hook will be called with arguments self and state_dict before calling load_state_dict on self. The registered hook can be used to perform pre-processing before the load_state_dict call is made.
Parameters
- hook (Callable) – The user defined hook to be registered.
- prepend (bool) – If True, the provided pre hook will be fired before all the already registered pre-hooks on load_state_dict. Otherwise, the provided hook will be fired after all the already registered pre-hooks. (default: False)
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
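A brief sketch of a pre-hook that strips the optional param_names entries from an incoming state dict before it is loaded (the hook body is illustrative):

def drop_param_names(optimizer, state_dict):
    # Pre-hook: receives a shallow copy of the state_dict passed by the user.
    for group in state_dict["param_groups"]:
        group.pop("param_names", None)
    # Returning a dict makes it the one that actually gets loaded.
    return state_dict

optimizer.register_load_state_dict_pre_hook(drop_param_names)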
register_state_dict_post_hook(hook, prepend=False)[source]¶
Register a state dict post-hook which will be called after state_dict() is called.
It should have the following signature:
hook(optimizer, state_dict) -> state_dict or None
The hook will be called with arguments self and state_dict after generating a state_dict on self. The hook may modify the state_dict in place or optionally return a new one. The registered hook can be used to perform post-processing on the state_dict before it is returned.
Parameters
- hook (Callable) – The user defined hook to be registered.
- prepend (bool) – If True, the provided post hook will be fired before all the already registered post-hooks on state_dict. Otherwise, the provided hook will be fired after all the already registered post-hooks. (default: False)
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
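A brief sketch of a post-hook that annotates the generated state dict (the extra key is purely illustrative and not recognized by the optimizer):

def tag_state_dict(optimizer, state_dict):
    # Post-hook: runs on the freshly generated state dict before it is returned.
    state_dict["generated_by"] = "state_dict post-hook sketch"
    return state_dict

optimizer.register_state_dict_post_hook(tag_state_dict)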
register_state_dict_pre_hook(hook, prepend=False)[source]¶
Register a state dict pre-hook which will be called before state_dict() is called.
It should have the following signature:
hook(optimizer) -> None
The optimizer argument is the optimizer instance being used. The hook will be called with argument self before calling state_dict on self. The registered hook can be used to perform pre-processing before the state_dict call is made.
Parameters
- hook (Callable) – The user defined hook to be registered.
- prepend (bool) – If True, the provided pre hook will be fired before all the already registered pre-hooks on state_dict. Otherwise, the provided hook will be fired after all the already registered pre-hooks. (default: False)
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
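A brief sketch of such a pre-hook (the hook body is illustrative):

def log_before_snapshot(optimizer):
    # Pre-hook: runs just before the state dict is generated.
    print("capturing optimizer state")

optimizer.register_state_dict_pre_hook(log_before_snapshot)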
register_step_post_hook(hook)[source]¶
Register an optimizer step post hook which will be called after optimizer step.
It should have the following signature:
hook(optimizer, args, kwargs) -> None
The optimizer argument is the optimizer instance being used.
Parameters
hook (Callable) – The user defined hook to be registered.
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
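A brief sketch of a post hook that counts completed steps (the counter is illustrative):

step_count = {"n": 0}

def count_steps(optimizer, args, kwargs):
    # Post-hook: runs after every optimizer.step() call; args and kwargs are
    # the arguments that were passed to step().
    step_count["n"] += 1

handle = optimizer.register_step_post_hook(count_steps)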
register_step_pre_hook(hook)[source]¶
Register an optimizer step pre hook which will be called before optimizer step.
It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.
Parameters
hook (Callable) – The user defined hook to be registered.
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
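A brief sketch of a pre hook (the hook body is illustrative):

def announce_step(optimizer, args, kwargs):
    # Pre-hook: runs before every optimizer.step(). Returning None keeps the
    # original arguments; returning (new_args, new_kwargs) would replace them.
    print("starting an L-BFGS step")
    return None

optimizer.register_step_pre_hook(announce_step)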
state_dict()[source]¶
Return the state of the optimizer as a dict.
It contains two entries:
- state: a Dict holding current optimization state. Its content differs between optimizer classes, but some common characteristics hold. For example, state is saved per parameter, and the parameter itself is NOT saved. state is a Dictionary mapping parameter ids to a Dict with state corresponding to each parameter.
- param_groups: a List containing all parameter groups where each parameter group is a Dict. Each parameter group contains metadata specific to the optimizer, such as learning rate and weight decay, as well as a List of parameter IDs of the parameters in the group. If a param group was initialized with named_parameters(), the names content will also be saved in the state dict.
NOTE: The parameter IDs may look like indices but they are just IDs associating state with param_group. When loading from a state_dict, the optimizer will zip the param_group params (int IDs) and the optimizer param_groups (actual nn.Parameters) in order to match state WITHOUT additional verification.
A returned state dict might look something like:
{
    'state': {
        0: {'momentum_buffer': tensor(...), ...},
        1: {'momentum_buffer': tensor(...), ...},
        2: {'momentum_buffer': tensor(...), ...},
        3: {'momentum_buffer': tensor(...), ...}
    },
    'param_groups': [
        {
            'lr': 0.01,
            'weight_decay': 0,
            ...
            'params': [0],
            'param_names': ['param0']  (optional)
        },
        {
            'lr': 0.001,
            'weight_decay': 0.5,
            ...
            'params': [1, 2, 3],
            'param_names': ['param1', 'layer.weight', 'layer.bias']  (optional)
        }
    ]
}
Return type
Dict[str, Any]
step(closure)[source]¶
Perform a single optimization step.
Parameters
closure (Callable) – A closure that reevaluates the model and returns the loss.
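Note that, unlike most optimizers, LBFGS requires the closure argument. A short sketch, assuming the closure defined in the earlier usage example:

loss = optimizer.step(closure)
# The closure may be evaluated several times during a single step; the value
# returned by step() is the loss from its first evaluation.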
zero_grad(set_to_none=True)[source]¶
Reset the gradients of all optimized torch.Tensor s.
Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grad attributes are guaranteed to be None for params that did not receive a gradient.
3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
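A brief sketch of the set_to_none behavior, reusing the placeholder model and data from the earlier usage example:

optimizer.zero_grad(set_to_none=True)  # every .grad attribute becomes None
loss = torch.nn.functional.mse_loss(model(inputs), targets)
loss.backward()
print(model.weight.grad is None)  # False: this parameter received a gradient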