torch.cuda.reset_max_memory_cached — PyTorch 2.7 documentation


torch.cuda.reset_max_memory_cached(device=None)[source]

Reset the starting point in tracking maximum GPU memory managed by the caching allocator for a given device.

See max_memory_cached() for details.

Parameters

device (torch.device or int, optional) – selected device. Resets the statistic for the current device, given by current_device(), if device is None (default). A usage sketch follows below.
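A minimal usage sketch, assuming a CUDA-capable machine; the tensor shapes and the two-phase structure are illustrative, not part of the API:

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda:0")

    # Phase 1: allocate a large tensor, then read the peak cached
    # (reserved) memory observed so far on this device.
    a = torch.randn(4096, 4096, device=device)
    peak1 = torch.cuda.max_memory_cached(device)
    print(f"phase 1 peak cached: {peak1 / 1024**2:.1f} MiB")

    # Re-base the tracked peak to the current value, so the next
    # reading reflects only what happens from this point on.
    torch.cuda.reset_max_memory_cached(device)

    # Phase 2: a smaller allocation.
    b = torch.randn(1024, 1024, device=device)
    peak2 = torch.cuda.max_memory_cached(device)
    print(f"phase 2 peak cached: {peak2 / 1024**2:.1f} MiB")
```

Note that max_memory_cached() is deprecated in favor of max_memory_reserved() in recent releases, so this snippet may emit a deprecation warning.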

Warning

This function now calls reset_peak_memory_stats(), which resets all peak memory stats.
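A short sketch of the warning's practical consequence, assuming a CUDA device is available: the peak allocated statistic is also re-based, even though this function only names the cached pool.

```python
import torch

if torch.cuda.is_available():
    # Allocate and free a tensor so the peak *allocated* stat sits well
    # above the current allocated bytes.
    x = torch.randn(2048, 2048, device="cuda")
    del x

    print(torch.cuda.max_memory_allocated())  # still reflects x's peak

    # Because this delegates to reset_peak_memory_stats(), it re-bases
    # every peak stat to its current value, including max_memory_allocated.
    torch.cuda.reset_max_memory_cached()

    print(torch.cuda.max_memory_allocated())  # now ~0 (re-based)
```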

Note

See Memory management for more details about GPU memory management.
