DeepSpeedPrecision — PyTorch Lightning 2.6.0 documentation
class lightning.pytorch.plugins.precision.DeepSpeedPrecision(precision)
Bases: Precision
Precision plugin for DeepSpeed integration.
Parameters:
precision (Literal['32-true', '16-true', 'bf16-true', '16-mixed', 'bf16-mixed']) – Full precision (32-true), half precision (16-true, bf16-true), or mixed precision (16-mixed, bf16-mixed).
Raises:
ValueError – If an unsupported precision is provided.
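A minimal usage sketch, assuming a two-GPU DeepSpeed setup (the device count and ZeRO stage are illustrative; passing precision="bf16-mixed" directly to the Trainer is the more common route):

```python
from lightning.pytorch import Trainer
from lightning.pytorch.plugins.precision import DeepSpeedPrecision

# Constructing the plugin directly; an unsupported string such as
# "64-true" would raise a ValueError here.
precision = DeepSpeedPrecision("bf16-mixed")

trainer = Trainer(
    accelerator="gpu",
    devices=2,                      # illustrative device count
    strategy="deepspeed_stage_2",   # registered DeepSpeed strategy shorthand
    plugins=[precision],
)
```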
backward(tensor, model, optimizer, *args, **kwargs)
Performs back-propagation using DeepSpeed’s engine.
Parameters:
- tensor (Tensor) – the loss tensor
- model (LightningModule) – the model to be optimized
- optimizer (Optional[Steppable]) – ignored for DeepSpeed
- *args (Any) – additional positional arguments for the deepspeed.DeepSpeedEngine.backward() call
- **kwargs (Any) – additional keyword arguments for the deepspeed.DeepSpeedEngine.backward() call
Return type:
None
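For intuition, a simplified sketch of the delegation (not the actual implementation; `engine` stands for the `deepspeed.DeepSpeedEngine` that wraps the model):

```python
import torch

def backward_sketch(engine, loss: torch.Tensor) -> None:
    # DeepSpeed owns loss scaling and gradient partitioning, so the
    # plugin hands the loss to the engine instead of calling
    # loss.backward() itself.
    engine.backward(loss)
```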
clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)
DeepSpeed handles gradient clipping internally.
Return type:
None
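Because this method is intentionally a no-op, clipping is configured on the DeepSpeed side instead. A sketch using the strategy's config dict (`gradient_clipping` is a standard DeepSpeed config key; the values are illustrative):

```python
from lightning.pytorch import Trainer
from lightning.pytorch.strategies import DeepSpeedStrategy

strategy = DeepSpeedStrategy(
    config={
        "gradient_clipping": 1.0,           # clip the global grad norm to 1.0
        "zero_optimization": {"stage": 2},  # illustrative ZeRO setting
    }
)
trainer = Trainer(accelerator="gpu", devices=2, strategy=strategy, precision="16-mixed")
```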
convert_input(data)
Convert model inputs (forward) to the floating point precision type of this plugin.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
Return type:
Any
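As a hand-rolled illustration (not the plugin's internals) of what such a conversion can look like for a true half-precision setting: floating-point tensors are cast, while integer tensors such as token ids are left alone.

```python
import torch

batch = {
    "input_ids": torch.randint(0, 100, (4, 16)),  # integer data stays as-is
    "pixels": torch.rand(4, 3, 8, 8),             # float32 data gets cast
}
converted = {
    k: v.to(torch.float16) if torch.is_floating_point(v) else v
    for k, v in batch.items()
}
assert converted["pixels"].dtype == torch.float16
assert converted["input_ids"].dtype == torch.int64
```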
convert_module(module)
Convert the module parameters to the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
Return type:
Module
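For a true-precision setting such as bf16-true, the conversion conceptually amounts to casting the parameters; a minimal illustration (not the plugin's code):

```python
import torch
from torch import nn

model = nn.Linear(32, 32)         # parameters start in float32
model = model.to(torch.bfloat16)  # what a "bf16-true" conversion implies
assert model.weight.dtype == torch.bfloat16
```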
module_init_context()
Instantiate module parameters or tensors in the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
Return type:
AbstractContextManager
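A sketch of using the context directly, assuming a "bf16-true" setting so that parameters are allocated in bfloat16 from the start rather than created in float32 and cast afterwards:

```python
import torch
from torch import nn
from lightning.pytorch.plugins.precision import DeepSpeedPrecision

precision = DeepSpeedPrecision("bf16-true")

with precision.module_init_context():
    model = nn.Linear(1024, 1024)  # weights are created in bfloat16

assert model.weight.dtype == torch.bfloat16
```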
optimizer_step(optimizer, model, closure, **kwargs)
Hook to run the optimizer step.
Return type:
Any
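A simplified sketch of the hook's contract (hypothetical names; the real logic lives in DeepSpeedPrecision.optimizer_step): the closure runs the forward and backward pass, and the DeepSpeed engine then applies the update.

```python
def optimizer_step_sketch(engine, closure):
    closure_result = closure()  # forward pass, loss computation, backward()
    engine.step()               # DeepSpeed applies the parameter update
    return closure_result
```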
tensor_init_context()
Controls how tensors get created (device, dtype).
Return type:
AbstractContextManager
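Analogous to module_init_context() but for plain tensors; a sketch assuming a "16-true" setting:

```python
import torch
from lightning.pytorch.plugins.precision import DeepSpeedPrecision

precision = DeepSpeedPrecision("16-true")

with precision.tensor_init_context():
    t = torch.empty(8, 8)  # allocated directly in float16

assert t.dtype == torch.float16
```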