MemPool — PyTorch 2.7 documentation

class torch.cuda.MemPool(*args, **kwargs)[source]

MemPool represents a pool of memory in a caching allocator. Currently, it’s just the ID of the pool object maintained in the CUDACachingAllocator.

Parameters

allocator (torch._C._cuda_CUDAAllocator, optional) – a torch._C._cuda_CUDAAllocator object that defines how memory gets allocated in the pool. If allocator is None (the default), memory allocation follows the current configuration of the CUDACachingAllocator.
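As a minimal sketch of how a pool might be created and used, assuming a CUDA-capable PyTorch build (>= 2.5, where `MemPool` and the companion `torch.cuda.use_mem_pool` context manager were introduced):

```python
import torch

# Sketch only: requires a CUDA device and a recent PyTorch build.
if torch.cuda.is_available():
    # With allocator=None (the default), the pool follows the current
    # CUDACachingAllocator configuration.
    pool = torch.cuda.MemPool()
    print(pool.id)  # tuple of two ints identifying the pool

    # Allocations made inside this context are routed into `pool`.
    with torch.cuda.use_mem_pool(pool):
        x = torch.empty(1024, device="cuda")
```

A custom allocator (for example one built with `torch.cuda.CUDAPluggableAllocator`) can be passed instead of `None` to control how the pool obtains its memory.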

property allocator: Optional[_cuda_CUDAAllocator]

Returns the allocator this MemPool routes allocations to.

property id: tuple[int, int]

Returns the ID of this pool as a tuple of two ints.

snapshot()[source]

Return a snapshot of the CUDA memory allocator pool state across all devices.

Interpreting the output of this function requires familiarity with the memory allocator internals.

use_count()[source]

Returns the reference count of this pool.

Return type

int
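A hedged sketch of inspecting a pool with these two methods, again assuming a CUDA device is available; the snapshot entries are assumed here to mirror the segment-dictionary format of `torch.cuda.memory_snapshot()`:

```python
import torch

# Sketch only: requires a CUDA device.
if torch.cuda.is_available():
    pool = torch.cuda.MemPool()
    with torch.cuda.use_mem_pool(pool):
        x = torch.empty(1 << 20, device="cuda")

    print(pool.use_count())  # int: references currently held on the pool

    # Per-segment allocator state; interpreting the fields requires
    # familiarity with the CUDACachingAllocator internals.
    for segment in pool.snapshot():
        print(segment.get("device"), segment.get("total_size"))
```

Keeping the tensor `x` alive keeps its memory associated with the pool, which is reflected in the reference count.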