Precision — PyTorch Lightning 2.5.1.post0 documentation
class lightning.pytorch.plugins.precision.Precision[source]¶
Bases: Precision, CheckpointHooks
Base class for all plugins handling the precision-specific parts of the training.
The class attribute precision must be overwritten in child classes. The default value reflects fp32 training.
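For illustration, a minimal sketch (an assumption, not taken from this page) of a child class that overwrites the precision attribute; the "16-true" literal is assumed to be one of Lightning's supported precision strings:

```python
from lightning.pytorch.plugins.precision import Precision


class TrueHalfPrecision(Precision):
    # "16-true" is assumed here as an example of a supported precision string;
    # the parent's default corresponds to fp32 training.
    precision = "16-true"
```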
backward(tensor, model, optimizer, *args, **kwargs)[source]¶
Performs the actual backpropagation.
Parameters:
- tensor¶ (Tensor) – the loss value obtained from the closure
- model¶ (LightningModule) – the model to be optimized
- optimizer¶ (Optional[Steppable]) – current optimizer being used. None if using manual optimization
- *args¶ (Any) – Positional arguments intended for the actual function that performs the backward, like backward().
- **kwargs¶ (Any) – Keyword arguments for the same purpose as *args.
Return type:
None
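A hedged sketch of customizing this hook in a subclass; the loss-scaling factor is purely illustrative, and super() is assumed to keep the default backpropagation behaviour:

```python
from typing import Any

from torch import Tensor

from lightning.pytorch import LightningModule
from lightning.pytorch.plugins.precision import Precision


class ScaledBackwardPrecision(Precision):
    def backward(self, tensor: Tensor, model: LightningModule, optimizer, *args: Any, **kwargs: Any) -> None:
        # Scale the loss before delegating to the default backward behaviour.
        # The 0.5 factor is purely illustrative.
        super().backward(tensor * 0.5, model, optimizer, *args, **kwargs)
```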
clip_grad_by_norm(optimizer, clip_val)[source]¶
Clip gradients by norm.
Return type:
None
clip_grad_by_value(optimizer, clip_val)[source]¶
Clip gradients by value.
Return type:
None
clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)[source]¶
Clips the gradients.
Return type:
None
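A small usage sketch, assuming a plain torch module and optimizer stand in for the objects the Trainer would normally provide; clip_gradients() dispatches to clip_grad_by_norm() or clip_grad_by_value() according to gradient_clip_algorithm:

```python
import torch

from lightning.pytorch.plugins.precision import Precision

plugin = Precision()  # the base plugin, i.e. default fp32 behaviour
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(8, 4)).sum()
loss.backward()

# Clip by norm (the default algorithm shown in the signature above);
# passing GradClipAlgorithmType.VALUE instead would clip element-wise.
plugin.clip_gradients(optimizer, clip_val=1.0)
optimizer.step()
```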
connect(model, optimizers, lr_schedulers)[source]¶
Connects this plugin to the accelerator and the training process.
Return type:
tuple[Module, list[Optimizer], list[Any]]
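As a hedged illustration, a subclass could intercept connect() to inspect or wrap the objects before training starts; the base implementation is assumed here to return its inputs unchanged:

```python
from lightning.pytorch.plugins.precision import Precision


class InspectingPrecision(Precision):
    def connect(self, model, optimizers, lr_schedulers):
        # Inspect (or wrap) the objects before handing them back to the
        # training process; the base implementation is assumed to return
        # them unchanged.
        print(f"connecting {type(model).__name__} with {len(optimizers)} optimizer(s)")
        return super().connect(model, optimizers, lr_schedulers)
```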
optimizer_step(optimizer, model, closure, **kwargs)[source]¶
Hook to run the optimizer step.
Return type:
Any
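A hedged sketch of customizing this hook; super() is assumed to execute the closure and the actual optimizer step and to return that result:

```python
import time

from lightning.pytorch.plugins.precision import Precision


class TimedPrecision(Precision):
    def optimizer_step(self, optimizer, model, closure, **kwargs):
        # Time the step; super() is assumed to run the closure and the
        # underlying optimizer.step(), returning that result.
        start = time.perf_counter()
        result = super().optimizer_step(optimizer, model, closure, **kwargs)
        print(f"optimizer step took {time.perf_counter() - start:.4f}s")
        return result
```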
post_backward(tensor, module)[source]¶
Runs after precision plugin executes backward.
Parameters:
- tensor¶ (Tensor) – The tensor that will be used for backpropagation
- module¶ (LightningModule) – The module that was involved in producing the tensor and whose parameters need the gradients
Return type:
pre_backward(tensor, module)[source]¶
Runs before precision plugin executes backward.
Parameters:
- tensor¶ (Tensor) – The tensor that will be used for backpropagation
- module¶ (LightningModule) – The module that was involved in producing the tensor and whose parameters need the gradients
Return type:
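A hedged sketch using these two hooks to log around backpropagation; overriding them and delegating to super() is assumed to keep the default behaviour intact:

```python
from lightning.pytorch.plugins.precision import Precision


class VerbosePrecision(Precision):
    def pre_backward(self, tensor, module):
        # Log the loss value just before backpropagation runs.
        print(f"about to backpropagate loss={tensor.item():.4f}")
        return super().pre_backward(tensor, module)

    def post_backward(self, tensor, module):
        # Confirm after backward() that gradients were produced.
        n_grads = sum(p.grad is not None for p in module.parameters())
        print(f"backward done, {n_grads} parameter(s) have gradients")
        return super().post_backward(tensor, module)
```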
predict_step_context()[source]¶
A contextmanager for the predict step.
Return type:
test_step_context()[source]¶
A contextmanager for the test step.
Return type:
train_step_context()[source]¶
A contextmanager for the training step.
Return type:
val_step_context()[source]¶
A contextmanager for the validation step.
Return type:
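For illustration, a hedged sketch of overriding one of these contexts to run the training step under torch.autocast; the device and dtype choices below are assumptions, not the plugin's documented behaviour:

```python
from contextlib import contextmanager

import torch

from lightning.pytorch.plugins.precision import Precision


class AutocastTrainPrecision(Precision):
    @contextmanager
    def train_step_context(self):
        # CPU/bfloat16 is chosen only so the example runs anywhere; a real
        # plugin would pick the device and dtype matching its precision.
        with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
            yield
```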