torch.cuda.memory_reserved — PyTorch 2.0 documentation

torch.cuda.memory_reserved(device=None)[source]

Returns the current GPU memory managed by the caching allocator, in bytes, for a given device. This is the total amount reserved from the driver, which includes memory cached for reuse and is therefore at least as large as the memory occupied by live tensors.

Parameters:

device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).

Return type:

int

Note

See Memory management for more details about GPU memory management.
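A minimal usage sketch is shown below; it assumes a CUDA-capable device is available, and the tensor size is arbitrary, chosen only to make the allocator reserve some memory.

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda:0")

    before = torch.cuda.memory_reserved(device)

    # Allocating a tensor forces the caching allocator to reserve a block
    # from the driver, so the reserved figure grows (or stays flat if an
    # existing cached block is reused).
    x = torch.empty(1024, 1024, device=device)

    after = torch.cuda.memory_reserved(device)
    print(f"reserved before: {before} bytes, after: {after} bytes")

    # memory_allocated() reports only the bytes occupied by live tensors,
    # which is always <= memory_reserved() for the same device.
    print(f"allocated: {torch.cuda.memory_allocated(device)} bytes")
```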
