LightningModule — PyTorch Lightning 2.5.1.post0 documentation
class lightning.pytorch.core.LightningModule(*args, **kwargs)[source]¶
Bases: _DeviceDtypeModuleMixin, HyperparametersMixin, ModelHooks, DataHooks, CheckpointHooks, Module
all_gather(data, group=None, sync_grads=False)[source]¶
Gather tensors or collections of tensors from multiple processes.
This method needs to be called on all processes and the tensors need to have the same shape across all processes, otherwise your program will stall forever.
Parameters:
- data¶ (Union[Tensor, dict, list, tuple]) – int, float, tensor of shape (batch, …), or a (possibly nested) collection thereof.
- group¶ (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)
- sync_grads¶ (bool) – flag that allows users to synchronize gradients for the all_gather operation
Return type:
Union[Tensor, dict, list, tuple]
Returns:
A tensor of shape (world_size, batch, …), or if the input was a collection the output will also be a collection with tensors of this shape. For the special case where world_size is 1, no additional dimension is added to the tensor(s).
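For illustration, a minimal sketch of gathering a per-process scalar inside a hook (compute_loss and the metric name are hypothetical, not part of the API):
def validation_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)  # hypothetical helper
    # every process must reach this call; the result has shape (world_size,)
    gathered = self.all_gather(loss)
    self.log("val_loss_mean", gathered.mean(), rank_zero_only=True)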
backward(loss, *args, **kwargs)[source]¶
Called to perform backward on the loss returned in training_step(). Override this hook with your own implementation if you need to.
Parameters:
loss¶ (Tensor) – The loss tensor returned by training_step(). If gradient accumulation is used, the loss here holds the normalized value (scaled by 1 / accumulation steps).
Return type:
None
Example:
def backward(self, loss):
    loss.backward()
clip_gradients(optimizer, gradient_clip_val=None, gradient_clip_algorithm=None)[source]¶
Handles gradient clipping internally.
Note
- Do not override this method. If you want to customize gradient clipping, consider using the configure_gradient_clipping() method.
- For manual optimization (self.automatic_optimization = False), if you want to use gradient clipping, consider calling self.clip_gradients(opt, gradient_clip_val=0.5, gradient_clip_algorithm="norm") manually in the training step.
Parameters:
- optimizer¶ (Optimizer) – Current optimizer being used.
- gradient_clip_val¶ (Union[int, float, None]) – The value at which to clip gradients.
- gradient_clip_algorithm¶ (Optional[str]) – The gradient clipping algorithm to use. Pass gradient_clip_algorithm="value" to clip by value, and gradient_clip_algorithm="norm" to clip by norm.
Return type:
None
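As a rough sketch of the manual-optimization usage described in the note above (assumes self.automatic_optimization = False; compute_loss is a hypothetical helper):
def training_step(self, batch, batch_idx):
    opt = self.optimizers()
    loss = self.compute_loss(batch)  # hypothetical helper
    opt.zero_grad()
    self.manual_backward(loss)
    # clip before the optimizer applies the update
    self.clip_gradients(opt, gradient_clip_val=0.5, gradient_clip_algorithm="norm")
    opt.step()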
configure_callbacks()[source]¶
Configure model-specific callbacks. When the model gets attached, e.g., when .fit() or .test() gets called, the list or a callback returned here will be merged with the list of callbacks passed to the Trainer's callbacks argument. If a callback returned here has the same type as one or several callbacks already present in the Trainer's callbacks list, it will take priority and replace them. In addition, Lightning will make sure ModelCheckpoint callbacks run last.
Return type:
Union[Sequence[Callback], Callback]
Returns:
A callback or a list of callbacks which will extend the list of callbacks in the Trainer.
Example:
def configure_callbacks(self):
    early_stop = EarlyStopping(monitor="val_acc", mode="max")
    checkpoint = ModelCheckpoint(monitor="val_loss")
    return [early_stop, checkpoint]
configure_gradient_clipping(optimizer, gradient_clip_val=None, gradient_clip_algorithm=None)[source]¶
Perform gradient clipping for the optimizer parameters. Called before optimizer_step().
Parameters:
- optimizer¶ (Optimizer) – Current optimizer being used.
- gradient_clip_val¶ (Union[int, float, None]) – The value at which to clip gradients. By default, the value passed in Trainer will be available here.
- gradient_clip_algorithm¶ (Optional[str]) – The gradient clipping algorithm to use. By default, the value passed in Trainer will be available here.
Return type:
None
Example:
def configure_gradient_clipping(self, optimizer, gradient_clip_val, gradient_clip_algorithm):
    # Implement your own custom logic to clip gradients
    # You can call self.clip_gradients with your settings:
    self.clip_gradients(
        optimizer,
        gradient_clip_val=gradient_clip_val,
        gradient_clip_algorithm=gradient_clip_algorithm,
    )
configure_optimizers()[source]¶
Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple. Optimization with multiple optimizers only works in the manual optimization mode.
Return type:
Union[Optimizer, Sequence[Optimizer], tuple[Sequence[Optimizer], Sequence[Union[LRScheduler, ReduceLROnPlateau, LRSchedulerConfig]]], OptimizerConfig, OptimizerLRSchedulerConfig, Sequence[OptimizerConfig], Sequence[OptimizerLRSchedulerConfig], None]
Returns:
Any of these 6 options.
- Single optimizer.
- List or Tuple of optimizers.
- Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).
- Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.
- None - Fit will run without any optimizer.
The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.
lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # scheduler.step(). 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like ReduceLROnPlateau
    "monitor": "val_loss",
    # If set to True, will enforce that the value specified 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to False, it will only produce a warning
    "strict": True,
    # If using the LearningRateMonitor callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}
When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.
The ReduceLROnPlateau scheduler requires a monitor
def configure_optimizers(self):
    optimizer = Adam(...)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": ReduceLROnPlateau(optimizer, ...),
            "monitor": "metric_to_track",
            "frequency": "indicates how often the metric is updated",
            # If "monitor" references validation metrics, then "frequency" should be set to a
            # multiple of "trainer.check_val_every_n_epoch".
        },
    }
In the case of two optimizers, only one using the ReduceLROnPlateau scheduler
def configure_optimizers(self):
    optimizer1 = Adam(...)
    optimizer2 = SGD(...)
    scheduler1 = ReduceLROnPlateau(optimizer1, ...)
    scheduler2 = LambdaLR(optimizer2, ...)
    return (
        {
            "optimizer": optimizer1,
            "lr_scheduler": {
                "scheduler": scheduler1,
                "monitor": "metric_to_track",
            },
        },
        {"optimizer": optimizer2, "lr_scheduler": scheduler2},
    )
Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.
Note
Some things to know:
- Lightning calls .backward() and .step() automatically in case of automatic optimization.
- If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default "epoch") in the scheduler configuration, Lightning will call the scheduler's .step() method automatically in case of automatic optimization.
- If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizer.
- If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.
- If you use multiple optimizers, you will have to switch to 'manual optimization' mode and step them yourself.
- If you need to control how often the optimizer steps, override the optimizer_step() hook.
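As a further illustration, a minimal sketch combining a single optimizer with a step-based scheduler via lr_scheduler_config (the learning rate and StepLR hyperparameters are placeholders):
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

def configure_optimizers(self):
    optimizer = Adam(self.parameters(), lr=1e-3)
    scheduler = StepLR(optimizer, step_size=10, gamma=0.1)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": scheduler,
            # step the scheduler after every optimizer update instead of every epoch
            "interval": "step",
            "frequency": 1,
        },
    }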
forward(*args, **kwargs)[source]¶
Same as torch.nn.Module.forward().
Parameters:
- *args¶ (Any) – Whatever you decide to pass into the forward method.
- **kwargs¶ (Any) – Keyword arguments are also possible.
Return type:
Any
Returns:
Your model’s output
freeze()[source]¶
Freeze all params for inference.
Example:
model = MyLightningModule(...)
model.freeze()
Return type:
None
load_from_checkpoint(checkpoint_path, map_location=None, hparams_file=None, strict=None, **kwargs)[source]¶
Primary way of loading a model from a checkpoint. When Lightning saves a checkpoint it stores the arguments passed to __init__ in the checkpoint under "hyper_parameters".
Any arguments specified through **kwargs will override args stored in "hyper_parameters".
Parameters:
- checkpoint_path¶ (Union[str, Path, IO]) – Path to checkpoint. This can also be a URL, or file-like object
- map_location¶ (Union[device, str, int, Callable[[UntypedStorage, str], Optional[UntypedStorage]], dict[Union[device, str, int], Union[device, str, int]], None]) – If your checkpoint saved a GPU model and you now load on CPUs or a different number of GPUs, use this to map to the new setup. The behaviour is the same as in torch.load().
- hparams_file¶ (Union[str, Path, None]) – Optional path to a .yaml or .csv file with hierarchical structure as in this example:
drop_prob: 0.2
dataloader:
    batch_size: 32
You most likely won't need this since Lightning will always save the hyperparameters to the checkpoint. However, if your checkpoint weights don't have the hyperparameters saved, use this method to pass in a .yaml file with the hparams you'd like to use. These will be converted into a dict and passed into your LightningModule for use.
If your model's hparams argument is Namespace and the .yaml file has hierarchical structure, you need to refactor your model to treat hparams as dict.
- strict¶ (Optional[bool]) – Whether to strictly enforce that the keys in checkpoint_path match the keys returned by this module's state dict. Defaults to True unless LightningModule.strict_loading is set, in which case it defaults to the value of LightningModule.strict_loading.
- **kwargs¶ (Any) – Any extra keyword args needed to init the model. Can also be used to override saved hyperparameter values.
Return type:
Self
Returns:
LightningModule instance with loaded weights and hyperparameters (if available).
Note
load_from_checkpoint is a class method. You should use your LightningModule class to call it instead of the LightningModule instance, or a TypeError will be raised.
Note
To ensure all layers can be loaded from the checkpoint, this function will call configure_model() directly after instantiating the model if this hook is overridden in your LightningModule. However, note that load_from_checkpoint does not support loading sharded checkpoints, and you may run out of memory if the model is too large. In this case, consider loading through the Trainer via .fit(ckpt_path=...).
Example:
# load weights without mapping ...
model = MyLightningModule.load_from_checkpoint('path/to/checkpoint.ckpt')

# or load weights mapping all weights from GPU 1 to GPU 0 ...
map_location = {'cuda:1': 'cuda:0'}
model = MyLightningModule.load_from_checkpoint(
    'path/to/checkpoint.ckpt',
    map_location=map_location,
)

# or load weights and hyperparameters from separate files.
model = MyLightningModule.load_from_checkpoint(
    'path/to/checkpoint.ckpt',
    hparams_file='/path/to/hparams_file.yaml',
)

# override some of the params with new values
model = MyLightningModule.load_from_checkpoint(
    PATH,
    num_layers=128,
    pretrained_ckpt_path=NEW_PATH,
)

# predict
pretrained_model.eval()
pretrained_model.freeze()
y_hat = pretrained_model(x)
log(name, value, prog_bar=False, logger=None, on_step=None, on_epoch=None, reduce_fx='mean', enable_graph=False, sync_dist=False, sync_dist_group=None, add_dataloader_idx=True, batch_size=None, metric_attribute=None, rank_zero_only=False)[source]¶
Log a key, value pair.
Example:
self.log('train_loss', loss)
The default behavior per hook is documented here: Automatic Logging.
Parameters:
- name¶ (str) – key to log. Must be identical across all processes if using DDP or any other distributed strategy.
- value¶ (Union[Metric, Tensor, int, float]) – value to log. Can be a float, Tensor, or a Metric.
- prog_bar¶ (bool) – if True logs to the progress bar.
- logger¶ (Optional[bool]) – if True logs to the logger.
- on_step¶ (Optional[bool]) – if True logs at this step. The default value is determined by the hook. See Automatic Logging for details.
- on_epoch¶ (Optional[bool]) – if True logs epoch accumulated metrics. The default value is determined by the hook. See Automatic Logging for details.
- reduce_fx¶ (Union[str, Callable]) – reduction function over step values for end of epoch. torch.mean() by default.
- enable_graph¶ (bool) – if True, will not auto detach the graph.
- sync_dist¶ (bool) – if True, reduces the metric across devices. Use with care as this may lead to a significant communication overhead.
- sync_dist_group¶ (Optional[Any]) – the DDP group to sync across.
- add_dataloader_idx¶ (bool) – if True, appends the index of the current dataloader to the name (when using multiple dataloaders). If False, user needs to give unique names for each dataloader to not mix the values.
- batch_size¶ (Optional[int]) – Current batch_size. This will be directly inferred from the loaded batch, but for some data structures you might need to explicitly provide it.
- metric_attribute¶ (Optional[str]) – To restore the metric state, Lightning requires the reference of the torchmetrics.Metric in your model. This is found automatically if it is a model attribute.
- rank_zero_only¶ (bool) – Tells Lightning if you are calling self.log from every process (default) or only from rank 0. If True, you won't be able to use this metric as a monitor in callbacks (e.g., early stopping). Warning: Improper use can lead to deadlocks! See Advanced Logging for more details.
Return type:
None
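For instance, a brief sketch combining several of the parameters above (compute_loss and the metric names are hypothetical):
def training_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)  # hypothetical helper
    # show on the progress bar; aggregate both per step and per epoch
    self.log("train_loss", loss, prog_bar=True, on_step=True, on_epoch=True)
    # reduce across devices when running distributed (adds communication overhead)
    self.log("train_loss_synced", loss, sync_dist=True)
    return loss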
log_dict(dictionary, prog_bar=False, logger=None, on_step=None, on_epoch=None, reduce_fx='mean', enable_graph=False, sync_dist=False, sync_dist_group=None, add_dataloader_idx=True, batch_size=None, rank_zero_only=False)[source]¶
Log a dictionary of values at once.
Example:
values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
self.log_dict(values)
Parameters:
- dictionary¶ (Union[Mapping[str, Union[Metric, Tensor, int, float]], MetricCollection]) – key value pairs. Keys must be identical across all processes if using DDP or any other distributed strategy. The values can be a float, Tensor, Metric, or MetricCollection.
- prog_bar¶ (bool) – if True logs to the progress bar.
- logger¶ (Optional[bool]) – if True logs to the logger.
- on_step¶ (Optional[bool]) – if True logs at this step. None auto-logs for training_step but not validation/test_step. The default value is determined by the hook. See Automatic Logging for details.
- on_epoch¶ (Optional[bool]) – if True logs epoch accumulated metrics. None auto-logs for val/test step but not training_step. The default value is determined by the hook. See Automatic Logging for details.
- reduce_fx¶ (Union[str, Callable]) – reduction function over step values for end of epoch. torch.mean() by default.
- enable_graph¶ (bool) – if True, will not auto-detach the graph.
- sync_dist¶ (bool) – if True, reduces the metric across GPUs/TPUs. Use with care as this may lead to a significant communication overhead.
- sync_dist_group¶ (Optional[Any]) – the DDP group to sync across.
- add_dataloader_idx¶ (bool) – if True, appends the index of the current dataloader to the name (when using multiple). If False, user needs to give unique names for each dataloader to not mix values.
- batch_size¶ (Optional[int]) – Current batch size. This will be directly inferred from the loaded batch, but for some data structures you might need to explicitly provide it.
- rank_zero_only¶ (bool) – Tells Lightning if you are calling self.log from every process (default) or only from rank 0. If True, you won't be able to use this metric as a monitor in callbacks (e.g., early stopping). Warning: Improper use can lead to deadlocks! See Advanced Logging for more details.
Return type:
None
lr_scheduler_step(scheduler, metric)[source]¶
Override this method to adjust the default way the Trainer calls each scheduler. By default, Lightning calls step(), as shown in the example, for each scheduler based on its interval.
Parameters:
- scheduler¶ (Union[LRScheduler, ReduceLROnPlateau]) – Learning rate scheduler.
- metric¶ (Optional[Any]) – Value of the monitor used for schedulers like ReduceLROnPlateau.
Return type:
None
Examples:
DEFAULT
def lr_scheduler_step(self, scheduler, metric):
    if metric is None:
        scheduler.step()
    else:
        scheduler.step(metric)
Alternative way to update schedulers if it requires an epoch value
def lr_scheduler_step(self, scheduler, metric):
    scheduler.step(epoch=self.current_epoch)
lr_schedulers()[source]¶
Returns the learning rate scheduler(s) that are being used during training. Useful for manual optimization.
Return type:
Union[None, list[Union[LRScheduler, ReduceLROnPlateau]], LRScheduler, ReduceLROnPlateau]
Returns:
A single scheduler, or a list of schedulers in case multiple ones are present, or None if no schedulers were returned in configure_optimizers().
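A minimal manual-optimization sketch stepping a scheduler retrieved this way (assumes a single scheduler was returned from configure_optimizers()):
def on_train_epoch_end(self):
    # in manual optimization, schedulers are not stepped automatically
    sch = self.lr_schedulers()
    sch.step()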
manual_backward(loss, *args, **kwargs)[source]¶
Call this directly from your training_step() when doing optimizations manually. By using this, Lightning can ensure that all the proper scaling gets applied when using mixed precision.
See manual optimization for more examples.
Example:
def training_step(...):
    opt = self.optimizers()
    loss = ...
    opt.zero_grad()
    # automatically applies scaling, etc...
    self.manual_backward(loss)
    opt.step()
Parameters:
- loss¶ (Tensor) – The tensor on which to compute gradients. Must have a graph attached.
- *args¶ (Any) – Additional positional arguments to be forwarded to backward()
- **kwargs¶ (Any) – Additional keyword arguments to be forwarded to backward()
Return type:
None
optimizer_step(epoch, batch_idx, optimizer, optimizer_closure=None)[source]¶
Override this method to adjust the default way the Trainer calls the optimizer.
By default, Lightning calls step() and zero_grad() as shown in the example. This method (and zero_grad()) won't be called during the accumulation phase when Trainer(accumulate_grad_batches != 1). Overriding this hook has no benefit with manual optimization.
Parameters:
- epoch¶ (int) – Current epoch
- batch_idx¶ (int) – Index of current batch
- optimizer¶ (Union[Optimizer, LightningOptimizer]) – A PyTorch optimizer
- optimizer_closure¶ (Optional[Callable[[], Any]]) – The optimizer closure. This closure must be executed as it includes the calls to training_step(), optimizer.zero_grad(), and backward().
Return type:
None
Examples:
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure):
    # Add your custom logic to run directly before optimizer.step()
    optimizer.step(closure=optimizer_closure)
    # Add your custom logic to run directly after optimizer.step()
optimizer_zero_grad(epoch, batch_idx, optimizer)[source]¶
Override this method to change the default behaviour of optimizer.zero_grad().
Parameters:
- epoch¶ (int) – Current epoch
- batch_idx¶ (int) – Index of current batch
- optimizer¶ (Optimizer) – A PyTorch optimizer
Return type:
None
Examples:
DEFAULT
def optimizer_zero_grad(self, epoch, batch_idx, optimizer):
    optimizer.zero_grad()
Set gradients to None instead of zero to improve performance (not required on torch>=2.0.0).
def optimizer_zero_grad(self, epoch, batch_idx, optimizer):
    optimizer.zero_grad(set_to_none=True)
See torch.optim.Optimizer.zero_grad() for the explanation of the above example.
optimizers(use_pl_optimizer=True)[source]¶
Returns the optimizer(s) that are being used during training. Useful for manual optimization.
Parameters:
use_pl_optimizer¶ (bool) – If True, will wrap the optimizer(s) in a LightningOptimizer for automatic handling of precision, profiling, and counting of step calls for proper logging and checkpointing. It specifically wraps the step method and custom optimizers that don't have this method are not supported.
Return type:
Union[Optimizer, LightningOptimizer, _FabricOptimizer, list[Optimizer], list[LightningOptimizer], list[_FabricOptimizer]]
Returns:
A single optimizer, or a list of optimizers in case multiple ones are present.
predict_step(*args, **kwargs)[source]¶
Step function called during predict(). By default, it calls forward(). Override to add any processing logic.
The predict_step() is used to scale inference on multi-devices.
To prevent an OOM error, it is possible to use the BasePredictionWriter callback to write the predictions to disk or a database after each batch or on epoch end; a sketch of this pattern follows the example below.
The BasePredictionWriter should be used while using a spawn based accelerator. This happens for Trainer(strategy="ddp_spawn") or training on 8 TPU cores with Trainer(accelerator="tpu", devices=8) as predictions won't be returned.
Parameters:
- batch¶ – The output of your data iterable, normally a DataLoader.
- batch_idx¶ – The index of this batch.
- dataloader_idx¶ – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
Return type:
Any
Returns:
Predicted output (optional).
Example
class MyModel(LightningModule):
    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        return self(batch)

dm = ...
model = MyModel()
trainer = Trainer(accelerator="gpu", devices=2)
predictions = trainer.predict(model, dm)
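As a rough sketch of the BasePredictionWriter pattern mentioned above (the output directory and file naming are illustrative):
import os
import torch
from lightning.pytorch.callbacks import BasePredictionWriter

class PredictionWriter(BasePredictionWriter):
    def __init__(self, output_dir):
        super().__init__(write_interval="batch")
        self.output_dir = output_dir

    def write_on_batch_end(self, trainer, pl_module, prediction, batch_indices, batch, batch_idx, dataloader_idx):
        # persist each batch of predictions instead of accumulating them in memory
        torch.save(prediction, os.path.join(self.output_dir, f"predictions_{batch_idx}.pt"))

trainer = Trainer(callbacks=[PredictionWriter("predictions/")])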
print(*args, **kwargs)[source]¶
Prints only from process 0. Use this in any distributed mode to log only once.
Parameters:
- *args¶ (Any) – The thing to print. The same as for Python’s built-in print function.
- **kwargs¶ (Any) – The same as for Python’s built-in print function.
Return type:
None
Example:
def forward(self, x):
    self.print(x, 'in forward')
test_step(*args, **kwargs)[source]¶
Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.
Parameters:
- batch¶ – The output of your data iterable, normally a DataLoader.
- batch_idx¶ – The index of this batch.
- dataloader_idx¶ – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
Return type:
Union[Tensor, Mapping[str, Any], None]
Returns:
- Tensor - The loss tensor
- dict - A dictionary. Can include any keys, but must include the key 'loss'.
- None - Skip to the next batch.
if you have one test dataloader:
def test_step(self, batch, batch_idx): ...
if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...
Examples:
CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})
If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.
CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...
Note
If you don’t need to test you don’t need to implement this method.
Note
When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.
to_onnx(file_path, input_sample=None, **kwargs)[source]¶
Saves the model in ONNX format.
Parameters:
- file_path¶ (Union[str, Path, BytesIO]) – The path of the file the onnx model should be saved to.
- input_sample¶ (Optional[Any]) – An input for tracing. Default: None (Use self.example_input_array)
- **kwargs¶ (Any) – Will be passed to torch.onnx.export function.
Return type:
None
Example:
class SimpleModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(in_features=64, out_features=4)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

model = SimpleModel()
input_sample = torch.randn(1, 64)
model.to_onnx("export.onnx", input_sample, export_params=True)
to_torchscript(file_path=None, method='script', example_inputs=None, **kwargs)[source]¶
By default compiles the whole model to a ScriptModule. If you want to use tracing, please provide the argument method='trace' and make sure that either the example_inputs argument is provided, or the model has example_input_array set. If you would like to customize the modules that are scripted you should override this method. In case you want to return multiple modules, we recommend using a dictionary.
Parameters:
- file_path¶ (Union[str, Path, None]) – Path where to save the torchscript. Default: None (no file saved).
- method¶ (Optional[str]) – Whether to use TorchScript’s script or trace method. Default: ‘script’
- example_inputs¶ (Optional[Any]) – An input to be used to do tracing when method is set to ‘trace’. Default: None (uses example_input_array)
- **kwargs¶ (Any) – Additional arguments that will be passed to the torch.jit.script() or torch.jit.trace() function.
Note
- Requires the implementation of the forward() method.
- The exported script will be set to evaluation mode.
- It is recommended that you install the latest supported version of PyTorch to use this feature without limitations. See also the torch.jit documentation for supported features.
Example:
class SimpleModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(in_features=64, out_features=4)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

model = SimpleModel()
model.to_torchscript(file_path="model.pt")

torch.jit.save(
    model.to_torchscript(
        file_path="model_trace.pt", method='trace', example_inputs=torch.randn(1, 64)
    )
)
Return type:
Union[ScriptModule, dict[str, ScriptModule]]
Returns:
This LightningModule as a torchscript, regardless of whether file_path is defined or not.
toggle_optimizer(optimizer)[source]¶
Makes sure only the gradients of the current optimizer’s parameters are calculated in the training step to prevent dangling gradients in multiple-optimizer setup.
It works with untoggle_optimizer() to make sure param_requires_grad_state is properly reset.
Parameters:
optimizer¶ (Union[Optimizer, LightningOptimizer]) – The optimizer to toggle.
Return type:
None
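For example, a sketch of one half of a GAN-style manual-optimization step (assumes self.automatic_optimization = False; generator_loss is a hypothetical helper):
def training_step(self, batch, batch_idx):
    opt_g, opt_d = self.optimizers()

    # only the generator's parameters will receive gradients here
    self.toggle_optimizer(opt_g)
    loss_g = self.generator_loss(batch)  # hypothetical helper
    opt_g.zero_grad()
    self.manual_backward(loss_g)
    opt_g.step()
    self.untoggle_optimizer(opt_g)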
training_step(*args, **kwargs)[source]¶
Here you compute and return the training loss and some additional metrics for, e.g., the progress bar or logger.
Parameters:
- batch¶ – The output of your data iterable, normally a DataLoader.
- batch_idx¶ – The index of this batch.
- dataloader_idx¶ – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
Return type:
Union[Tensor, Mapping[str, Any], None]
Returns:
- Tensor - The loss tensor
- dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.
- None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.
In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:
def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss
To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:
def __init__(self):
    super().__init__()
    self.automatic_optimization = False
Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()

    # do training_step with decoder
    ...
    opt2.step()
Note
When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.
unfreeze()[source]¶
Unfreeze all parameters for training.
Example:
model = MyLightningModule(...)
model.unfreeze()
Return type:
None
untoggle_optimizer(optimizer)[source]¶
Resets the state of required gradients that were toggled with toggle_optimizer().
Parameters:
optimizer¶ (Union[Optimizer, LightningOptimizer]) – The optimizer to untoggle.
Return type:
None
validation_step(*args, **kwargs)[source]¶
Operates on a single batch of data from the validation set. In this step you'd normally generate examples or calculate anything of interest like accuracy.
Parameters:
- batch¶ – The output of your data iterable, normally a DataLoader.
- batch_idx¶ – The index of this batch.
- dataloader_idx¶ – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
Return type:
Union[Tensor, Mapping[str, Any], None]
Returns:
- Tensor - The loss tensor
- dict - A dictionary. Can include any keys, but must include the key 'loss'.
- None - Skip to the next batch.
if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...
if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...
Examples:
CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})
If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.
CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...
Note
If you don’t need to validate you don’t need to implement this method.
Note
When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.
property automatic_optimization: bool¶
If set to False you are responsible for calling .backward(), .step(), .zero_grad().
property current_epoch: int¶
The current epoch in the Trainer, or 0 if not attached.
property device_mesh: Optional[DeviceMesh]¶
Strategies like ModelParallelStrategy will create a device mesh that can be accessed in the configure_model() hook to parallelize the LightningModule.
property example_input_array: Optional[Union[Tensor, tuple, dict]]¶
The example input array is a specification of what the module can consume in the forward() method. The return type is interpreted as follows:
- Single tensor: It is assumed the model takes a single argument, i.e., model.forward(model.example_input_array)
- Tuple: The input array should be interpreted as a sequence of positional arguments, i.e., model.forward(*model.example_input_array)
- Dict: The input array represents named keyword arguments, i.e., model.forward(**model.example_input_array)
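For instance, a brief sketch of setting it in the constructor (the input shape is illustrative):
class SimpleModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(in_features=64, out_features=4)
        # a single tensor: the module is expected to consume one positional argument
        self.example_input_array = torch.randn(1, 64)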
property global_rank: int¶
The index of the current process across all nodes and devices.
property global_step: int¶
Total training batches seen across all epochs.
If no Trainer is attached, this property is 0.
property local_rank: int¶
The index of the current process within a single node.
property logger: Optional[Union[Logger, Logger]]¶
Reference to the logger object in the Trainer.
property loggers: Union[list[lightning.pytorch.loggers.logger.Logger], list[lightning.fabric.loggers.logger.Logger]]¶
Reference to the list of loggers in the Trainer.
property on_gpu¶
Returns True if this model is currently located on a GPU.
Useful to set flags around the LightningModule for different CPU vs GPU behavior.
property strict_loading: bool¶
Determines how Lightning loads this model using .load_state_dict(…, strict=model.strict_loading).
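As a rough sketch, assuming you want non-strict loading for a model whose head was replaced (attribute names and shapes are illustrative):
class FineTunedModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.backbone = ...
        self.head = torch.nn.Linear(128, 10)
        # allow checkpoints that are missing keys for the new head
        self.strict_loading = False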