CheckpointIO — PyTorch Lightning 2.5.1.post0 documentation
class lightning.pytorch.plugins.io.CheckpointIO
Bases: ABC
Interface to save/load checkpoints as they are saved through the Strategy.
Most plugins use the Torch-based IO plugin, TorchCheckpointIO, but some may require particular handling depending on the strategy. In addition, you can pass a custom CheckpointIO by extending this class and passing it to the Trainer, e.g. Trainer(plugins=[MyCustomCheckpointIO()]); a minimal sketch follows the note below.
Note
For some plugins, it is not possible to use a custom checkpoint plugin as checkpointing logic is not modifiable.
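A minimal sketch of such a subclass, assuming plain torch.save/torch.load on the local filesystem (LocalTorchCheckpointIO is a hypothetical name, not part of Lightning):

    import os
    from pathlib import Path
    from typing import Any, Optional, Union

    import torch
    from lightning.pytorch.plugins.io import CheckpointIO


    class LocalTorchCheckpointIO(CheckpointIO):
        # Hypothetical plugin: delegates everything to torch on the local disk.

        def save_checkpoint(self, checkpoint: dict[str, Any], path: Union[str, Path], storage_options: Optional[Any] = None) -> None:
            # Make sure the target directory exists, then dump the state dict.
            os.makedirs(os.path.dirname(str(path)) or ".", exist_ok=True)
            torch.save(checkpoint, path)

        def load_checkpoint(self, path: Union[str, Path], map_location: Optional[Any] = None) -> dict[str, Any]:
            # map_location is forwarded to torch.load unchanged.
            return torch.load(path, map_location=map_location)

        def remove_checkpoint(self, path: Union[str, Path]) -> None:
            # Only local files are handled in this sketch.
            if os.path.isfile(path):
                os.remove(path)

The plugin is then passed to the Trainer as described above:

    from lightning.pytorch import Trainer

    trainer = Trainer(plugins=[LocalTorchCheckpointIO()])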
abstract load_checkpoint(path, map_location=None)
Load a checkpoint from a path when resuming, or when loading a checkpoint for the test/validate/predict stages.
Parameters:
- path (Union[str, Path]) – Path to checkpoint
- map_location (Optional[Any]) – a function, torch.device, string or a dict specifying how to remap storage locations.
Return type:
Dict[str, Any]
Returns:
The loaded checkpoint.
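For example, with the hypothetical LocalTorchCheckpointIO sketched above, map_location can remap saved GPU tensors onto the CPU at load time (the checkpoint path is illustrative):

    io_plugin = LocalTorchCheckpointIO()

    # Load every tensor onto the CPU, regardless of the device it was saved from.
    ckpt = io_plugin.load_checkpoint("checkpoints/last.ckpt", map_location="cpu")

    # The same remapping, expressed as an explicit device dict.
    ckpt = io_plugin.load_checkpoint("checkpoints/last.ckpt", map_location={"cuda:0": "cpu"})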
abstract remove_checkpoint(path)
Remove checkpoint file from the filesystem.
Parameters:
path (Union[str, Path]) – Path to checkpoint
Return type:
None
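An implementation is not limited to the local filesystem. Below is a sketch that routes removal through fsspec so that remote URLs (e.g. s3://) work as well; using fsspec here is this sketch's choice, not something the interface requires:

    import fsspec

    def remove_checkpoint(self, path):
        # Resolve the URL to a concrete filesystem (local, s3, gcs, ...).
        fs, fs_path = fsspec.core.url_to_fs(str(path))
        if fs.exists(fs_path):
            fs.rm(fs_path)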
abstract save_checkpoint(checkpoint, path, storage_options=None)
Save model/training states as a checkpoint file through state-dump and file-write.
Parameters:
- checkpoint (dict[str, Any]) – dict containing model and trainer state
- path (Union[str, Path]) – write-target path
- storage_options (Optional[Any]) – Optional parameters when saving the model/training states.
Return type:
None
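storage_options is an opaque pass-through (e.g. from trainer.save_checkpoint); its meaning is entirely up to the plugin. Below is a sketch that interprets it as extra keyword arguments for fsspec.open, which is this sketch's convention rather than anything Lightning defines:

    import fsspec
    import torch

    def save_checkpoint(self, checkpoint, path, storage_options=None):
        # Plugin-defined convention: treat storage_options as fsspec.open kwargs.
        options = storage_options or {}
        with fsspec.open(str(path), "wb", **options) as f:
            torch.save(checkpoint, f)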
teardown()
This method is called to tear down the process.
Return type:
None