torch.cuda.memory_summary — PyTorch 2.0 documentation
torch.cuda.memory_summary(device=None, abbreviated=False)
Returns a human-readable printout of the current memory allocator statistics for a given device.
This can be useful to display periodically during training, or when handling out-of-memory exceptions.
Parameters:
- device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default).
- abbreviated (bool, optional) – whether to return an abbreviated summary (default: False).
Return type:
str
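A minimal usage sketch, assuming a CUDA-capable device is available; the tensor allocations are illustrative only and exist just so the allocator has statistics to report:

```python
import torch

if torch.cuda.is_available():
    # Allocate some tensors so the caching allocator has activity to summarize.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x

    # Full printout for the current device.
    print(torch.cuda.memory_summary())

    # Abbreviated printout for an explicit device index.
    print(torch.cuda.memory_summary(device=0, abbreviated=True))
```

Printing the abbreviated form periodically (for example, once per epoch) or inside an out-of-memory exception handler is a common way to inspect allocator behavior during training.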