XLAPrecision — PyTorch Lightning 2.5.1.post0 documentation
class lightning.pytorch.plugins.precision.XLAPrecision(precision='32-true')[source]¶
Bases: Precision
Plugin for training with XLA.
Parameters:
precision¶ (Literal['32-true', '16-true', 'bf16-true']) – Full precision ('32-true') or half precision ('16-true', 'bf16-true').
Raises:
ValueError – If an unsupported precision is provided.
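The supported values and the ValueError behavior can be illustrated with a minimal sketch. The helper `validate_precision` and its error message are hypothetical illustrations of the check described above, not the library's internals:

```python
# Hypothetical sketch of the precision check XLAPrecision performs on
# construction; `validate_precision` is illustrative, not a lightning API.
SUPPORTED_PRECISION = ("32-true", "16-true", "bf16-true")

def validate_precision(precision: str) -> str:
    """Return ``precision`` unchanged if supported, else raise ValueError."""
    if precision not in SUPPORTED_PRECISION:
        raise ValueError(
            f"precision={precision!r} is not supported with XLA. "
            f"Choose one of {SUPPORTED_PRECISION}."
        )
    return precision
```

Passing any other string (for example `"64-true"`) raises the ValueError documented above.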
optimizer_step(optimizer, model, closure, **kwargs)[source]¶
Hook to run the optimizer step.
Return type:
Any
teardown()[source]¶
This method is called to teardown the training process.
It is the right place to release memory and free other resources.
Return type:
None