torch.cuda.caching_allocator_alloc — PyTorch 2.7 documentation
torch.cuda.caching_allocator_alloc(size, device=None, stream=None)
Perform a memory allocation using the CUDA memory allocator.
Memory is allocated for a given device and stream. This function is intended for interoperability with other frameworks. Allocated memory is released through caching_allocator_delete().
Parameters
- size (int) – number of bytes to be allocated.
- device (torch.device or int, optional) – selected device. If None, the default CUDA device is used.
- stream (torch.cuda.Stream or int, optional) – selected stream. If None, the default stream for the selected device is used.
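A minimal usage sketch, assuming a CUDA-capable device is present. It allocates a raw buffer through the caching allocator, which returns an integer device pointer suitable for handing to another framework, then releases it with caching_allocator_delete():

```python
import torch

def allocate_raw(nbytes: int, device: int = 0) -> int:
    """Reserve `nbytes` of device memory via PyTorch's caching
    allocator and return the raw device pointer as an integer."""
    return torch.cuda.caching_allocator_alloc(nbytes, device=device)

if torch.cuda.is_available():
    ptr = allocate_raw(1024)  # 1 KiB on device 0
    # ... pass `ptr` to another library that consumes raw CUDA pointers ...
    torch.cuda.caching_allocator_delete(ptr)  # release when done
```

Because the memory comes from PyTorch's caching allocator rather than a direct cudaMalloc, the freed block is returned to PyTorch's pool instead of the driver, so it can be reused by subsequent tensor allocations without a synchronizing device-level free.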