DoublePrecision — PyTorch Lightning 2.5.1.post0 documentation
class lightning.pytorch.plugins.precision.DoublePrecision[source]¶
Bases: Precision
Plugin for training with double (torch.float64) precision.
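A minimal usage sketch (illustrative, not part of the original page), assuming the standard Trainer entry points for selecting a precision plugin:

    import lightning.pytorch as pl
    from lightning.pytorch.plugins.precision import DoublePrecision

    # Request double precision through the Trainer's precision flag ...
    trainer = pl.Trainer(precision="64-true")

    # ... or pass the plugin instance explicitly.
    trainer = pl.Trainer(plugins=[DoublePrecision()])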
convert_input(data)[source]¶
Convert model inputs (forward) to the floating point precision type of this plugin.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
Return type:
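An illustrative sketch of this hook as DoublePrecision overrides it; the assumption that non-floating-point tensors pass through unchanged is mine, not stated on this page:

    import torch
    from lightning.pytorch.plugins.precision import DoublePrecision

    plugin = DoublePrecision()
    batch = {"x": torch.randn(2, 3), "mask": torch.ones(2, 3, dtype=torch.bool)}
    converted = plugin.convert_input(batch)
    print(converted["x"].dtype)     # torch.float64
    print(converted["mask"].dtype)  # torch.bool (assumed unchanged: not a floating-point tensor)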
convert_module(module)[source]¶
Convert the module parameters to the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
Return type:
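A short sketch (not from the original page) of converting a module's floating-point parameters to float64:

    import torch
    from torch import nn
    from lightning.pytorch.plugins.precision import DoublePrecision

    plugin = DoublePrecision()
    module = plugin.convert_module(nn.Linear(4, 2))
    print(next(module.parameters()).dtype)  # torch.float64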
forward_context()[source]¶
A context manager to change the default tensor type.
See: torch.set_default_dtype()
Return type:
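A sketch of the effect for DoublePrecision (illustrative; assumes the default dtype inside the context is torch.float64 and is restored on exit):

    import torch
    from lightning.pytorch.plugins.precision import DoublePrecision

    plugin = DoublePrecision()
    with plugin.forward_context():
        print(torch.get_default_dtype())  # torch.float64
        print(torch.zeros(3).dtype)       # torch.float64
    # the previous default dtype (typically torch.float32) is restored here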
module_init_context()[source]¶
Instantiate module parameters or tensors in the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
Return type:
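An illustrative sketch, under the assumption that for DoublePrecision this amounts to creating parameters directly in float64:

    import torch
    from torch import nn
    from lightning.pytorch.plugins.precision import DoublePrecision

    plugin = DoublePrecision()
    with plugin.module_init_context():
        layer = nn.Linear(4, 2)
    print(layer.weight.dtype)  # torch.float64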
tensor_init_context()[source]¶
Controls how tensors get created (device, dtype).
Return type:
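A short sketch (illustrative; for DoublePrecision only the dtype is assumed to change, the device is left as-is):

    import torch
    from lightning.pytorch.plugins.precision import DoublePrecision

    plugin = DoublePrecision()
    with plugin.tensor_init_context():
        t = torch.empty(2, 2)
    print(t.dtype)  # torch.float64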