torch.cuda.max_memory_reserved — PyTorch 2.7 documentation
torch.cuda.max_memory_reserved(device=None)[source]
Return the maximum GPU memory managed by the caching allocator in bytes for a given device.
By default, this returns the peak cached memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak cached memory of each iteration in a training loop, as in the sketch below.
Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Return type
int
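A minimal sketch of the per-iteration measurement described above, assuming a CUDA device is available. The tiny linear model, optimizer, and random inputs are illustrative placeholders, not part of the documented API; only reset_peak_memory_stats() and max_memory_reserved() are the functions under discussion.

    import torch

    device = torch.device("cuda")
    # Placeholder model and optimizer for illustration only.
    model = torch.nn.Linear(1024, 1024).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(3):
        # Reset the peak counters so max_memory_reserved() reports the
        # peak reached during this iteration only.
        torch.cuda.reset_peak_memory_stats(device)

        inputs = torch.randn(256, 1024, device=device)
        loss = model(inputs).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Peak memory managed by the caching allocator (in bytes)
        # since the reset at the top of this iteration.
        peak = torch.cuda.max_memory_reserved(device)
        print(f"step {step}: peak cached memory = {peak / 1024**2:.1f} MiB")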