DeepSpeedPrecision — PyTorch Lightning 2.6.0 documentation

class lightning.pytorch.plugins.precision.DeepSpeedPrecision(precision)[source]

Bases: Precision

Precision plugin for DeepSpeed integration.

Parameters:

precision (Literal['32-true', '16-true', 'bf16-true', '16-mixed', 'bf16-mixed']) – Full precision (32-true), half precision (16-true, bf16-true) or mixed precision (16-mixed, bf16-mixed).

Raises:

ValueError – If unsupported precision is provided.
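
For example, the plugin can be constructed directly, or, more commonly, created by the Trainer from the precision flag; a minimal sketch (assuming DeepSpeed is installed and a CUDA-capable setup is available):

    from lightning.pytorch import Trainer
    from lightning.pytorch.plugins.precision import DeepSpeedPrecision

    # Direct construction; an unsupported value such as "64-true" raises ValueError.
    precision_plugin = DeepSpeedPrecision("16-mixed")

    # More common: let the Trainer build the plugin from the precision flag.
    trainer = Trainer(strategy="deepspeed", precision="16-mixed", accelerator="gpu", devices=1)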

backward(tensor, model, optimizer, *args, **kwargs)[source]

Performs back-propagation using DeepSpeed’s engine.

Parameters:

tensor (Tensor) – the loss tensor

model (LightningModule) – the model to be optimized

optimizer (Optional[Steppable]) – ignored for DeepSpeed

*args (Any) – additional positional arguments for the deepspeed.DeepSpeedEngine.backward() call

**kwargs (Any) – additional keyword arguments for the deepspeed.DeepSpeedEngine.backward() call

Return type:

None
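
Conceptually, the plugin bypasses the usual Tensor.backward() and hands the loss to the wrapped DeepSpeed engine, which also applies loss scaling for 16-mixed; a simplified sketch of that delegation, not the exact source:

    from typing import Any
    from torch import Tensor

    def deepspeed_backward(tensor: Tensor, model: Any, *args: Any, **kwargs: Any) -> None:
        # Under the DeepSpeed strategy, the trainer's wrapped model is a
        # deepspeed.DeepSpeedEngine, which owns backward and loss scaling.
        deepspeed_engine = model.trainer.model
        deepspeed_engine.backward(tensor, *args, **kwargs)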

clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)[source]

DeepSpeed handles gradient clipping internally.

Return type:

None
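
This override is deliberately a no-op: the clip value configured on the Trainer is forwarded into DeepSpeed's own gradient_clipping config setting (an assumption about the strategy wiring, not documented on this page), so clipping happens inside the engine's step:

    from lightning.pytorch import Trainer

    # gradient_clip_val is not applied by DeepSpeedPrecision.clip_gradients;
    # DeepSpeed performs the clipping itself during engine.step().
    trainer = Trainer(strategy="deepspeed", precision="16-mixed", gradient_clip_val=1.0)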

convert_input(data)[source]

Convert model inputs (forward) to the floating point precision type of this plugin.

This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).

Return type:

Any
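
For the DeepSpeed plugin the intended effect is that floating-point tensors in the input collection are cast to the plugin's dtype while everything else passes through; a small sketch of the expected behavior:

    import torch
    from lightning.pytorch.plugins.precision import DeepSpeedPrecision

    plugin = DeepSpeedPrecision("16-mixed")
    batch = {"x": torch.randn(2, 4), "y": torch.tensor([0, 1])}
    converted = plugin.convert_input(batch)
    # Expected: float tensors become torch.float16, integer labels keep their dtype.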

convert_module(module)[source]

Convert the module parameters to the precision type this plugin handles.

This is optional and depends on the precision limitations during optimization.

Return type:

Module
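
For the "*-true" precisions the whole module is expected to be moved to the corresponding dtype, while the mixed precisions leave parameters in float32; a hedged example:

    import torch
    from lightning.pytorch.plugins.precision import DeepSpeedPrecision

    plugin = DeepSpeedPrecision("bf16-true")
    module = plugin.convert_module(torch.nn.Linear(4, 4))
    # Expected: parameters now live in torch.bfloat16.
    print(next(module.parameters()).dtype)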

module_init_context()[source]

Instantiate module parameters or tensors in the precision type this plugin handles.

This is optional and depends on the precision limitations during optimization.

Return type:

AbstractContextManager
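
This context manager matters for large models, since parameters are created directly in the target dtype instead of being allocated in float32 and converted afterwards; a sketch, assuming a "*-true" precision:

    import torch
    from lightning.pytorch.plugins.precision import DeepSpeedPrecision

    plugin = DeepSpeedPrecision("16-true")
    with plugin.module_init_context():
        layer = torch.nn.Linear(8, 8)  # parameters expected to be created as torch.float16
    print(next(layer.parameters()).dtype)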

optimizer_step(optimizer, model, closure, **kwargs)[source]

Hook to run the optimizer step.

Return type:

Any
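
With DeepSpeed the engine, not the raw optimizer, performs the update (including loss-scale bookkeeping for 16-mixed); a simplified sketch of what the hook does, not the exact source:

    from typing import Any, Callable

    def deepspeed_optimizer_step(model: Any, closure: Callable[[], Any], **kwargs: Any) -> Any:
        closure()                               # runs forward and backward for this step
        deepspeed_engine = model.trainer.model  # the wrapped deepspeed.DeepSpeedEngine
        return deepspeed_engine.step(**kwargs)  # engine applies the parameter update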

tensor_init_context()[source]

Controls how tensors get created (device, dtype).

Return type:

AbstractContextManager
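
Like module_init_context(), this controls the default dtype while the context is active; a sketch, assuming a "*-true" precision:

    import torch
    from lightning.pytorch.plugins.precision import DeepSpeedPrecision

    plugin = DeepSpeedPrecision("bf16-true")
    with plugin.tensor_init_context():
        weights = torch.empty(4, 4)  # expected dtype: torch.bfloat16
    print(weights.dtype)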