torch.Storage — PyTorch 2.7 documentation

In PyTorch, a regular tensor is a multi-dimensional array that is defined by the following components: its storage (the buffer holding the actual data), its size (shape), its stride, and its storage offset.

These components together define the structure and data of a tensor, with the storage holding the actual data and the rest serving as metadata.
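As a minimal illustrative sketch (not part of the official example set), these components can be inspected directly on a small tensor:

>>> import torch
>>> t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
>>> t.size()                         # the shape
torch.Size([2, 3])
>>> t.stride()                       # elements to skip to move one step along each dimension
(3, 1)
>>> t.storage_offset()               # where this tensor starts inside its storage
0
>>> t.untyped_storage().nbytes()     # six float32 values -> 24 raw bytes
24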

Untyped Storage API

A torch.UntypedStorage is a contiguous, one-dimensional array of elements. Its length is equal to the number of bytes of the tensor. The storage serves as the underlying data container for tensors. In general, a tensor created in PyTorch using regular constructors such as zeros(), zeros_like() or new_zeros() will produce tensors where there is a one-to-one correspondence between the tensor storage and the tensor itself.

However, a storage is allowed to be shared by multiple tensors. For instance, any view of a tensor (obtained through view() or some, but not all, kinds of indexing like integers and slices) will point to the same underlying storage as the original tensor. When serializing and deserializing tensors that share a common storage, the relationship is preserved, and the tensors continue to point to the same storage. Interestingly, deserializing multiple tensors that point to a single storage can be faster than deserializing multiple independent tensors.
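A small sketch of storage sharing through a view (the values are purely for demonstration):

>>> import torch
>>> base = torch.zeros(4, 4)
>>> v = base[1:3]                    # slicing returns a view, not a copy
>>> v.untyped_storage().data_ptr() == base.untyped_storage().data_ptr()
True
>>> v.fill_(1.)                      # writes through the view...
tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.]])
>>> base[1]                          # ...are visible in the original tensor
tensor([1., 1., 1., 1.])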

A tensor's storage can be accessed through the untyped_storage() method. This will return an object of type torch.UntypedStorage. Fortunately, storages have a unique identifier accessed through the torch.UntypedStorage.data_ptr() method. In regular settings, two tensors with the same data storage will have the same storage data_ptr. However, a tensor can point to two separate storages, one for its data attribute and another for its grad attribute. Each will have a data_ptr() of its own. In general, there is no guarantee that a torch.Tensor.data_ptr() and torch.UntypedStorage.data_ptr() match, and this should not be assumed to be true.
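A small sketch of this distinction, using a sliced view whose tensor-level pointer is offset inside the shared storage:

>>> import torch
>>> t = torch.arange(8, dtype=torch.float32)
>>> u = t[2:]                                    # a view starting 2 elements into the storage
>>> u.untyped_storage().data_ptr() == t.untyped_storage().data_ptr()
True
>>> u.data_ptr() - t.data_ptr()                  # the tensor-level pointer is offset by 2 * 4 bytes
8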

Untyped storages are somewhat independent of the tensors that are built on them. Practically, this means that tensors with different dtypes or shapes can point to the same storage. It also implies that a tensor's storage can be changed, as the following example shows:

>>> t = torch.ones(3)
>>> s0 = t.untyped_storage()
>>> s0                 # the 12 raw bytes of three float32 ones (0 0 128 63, three times)
 0 0 128 63 0 0 128 63 0 0 128 63
[torch.storage.UntypedStorage(device=cpu) of size 12]
>>> s1 = s0.clone()
>>> s1.fill_(0)
 0 0 0 0 0 0 0 0 0 0 0 0
[torch.storage.UntypedStorage(device=cpu) of size 12]
>>> # Fill the tensor with a zeroed storage
>>> t.set_(s1, storage_offset=t.storage_offset(), stride=t.stride(), size=t.size())
tensor([0., 0., 0.])

Warning

Please note that directly modifying a tensor’s storage as shown in this example is not a recommended practice. This low-level manipulation is illustrated solely for educational purposes, to demonstrate the relationship between tensors and their underlying storages. In general, it’s more efficient and safer to use standard torch.Tensor methods, such as clone() and fill_(), to achieve the same results.
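As a safer illustration of the earlier claim that tensors with different dtypes can share one storage, the following sketch reinterprets the bytes of a float32 tensor as int32 through view(), without any low-level storage manipulation:

>>> import torch
>>> t = torch.ones(3, dtype=torch.float32)
>>> i = t.view(torch.int32)                      # reinterpret the same bytes as int32
>>> i.untyped_storage().data_ptr() == t.untyped_storage().data_ptr()
True
>>> i                                            # 1.0f has bit pattern 0x3f800000
tensor([1065353216, 1065353216, 1065353216], dtype=torch.int32)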

Other than data_ptr, untyped storages also have other attributes such as filename (in case the storage points to a file on disk), device or is_cuda for device checks. A storage can also be manipulated in-place or out-of-place with methods like copy_, fill_ or pin_memory. For more information, check the API reference below. Keep in mind that modifying storages is a low-level API and comes with risks! Most of these APIs also exist on the tensor level: if present, they should be prioritized over their storage counterparts.
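A brief, illustrative tour of some of these attributes and methods on a plain CPU storage (the outputs shown are what one would expect on CPU):

>>> import torch
>>> s = torch.ones(3).untyped_storage()
>>> s.device
device(type='cpu')
>>> s.is_cuda
False
>>> s.nbytes()                       # three float32 values -> 12 bytes
12
>>> s.filename is None               # not backed by a memory-mapped file
True
>>> list(s.fill_(0)) == [0] * 12     # in-place fill; prefer tensor-level fill_ in practice
True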

Special cases

We mentioned that a tensor that has a non-None grad attribute actually has two pieces of data within it. In this case, untyped_storage() will return the storage of the data attribute, whereas the storage of the gradient can be obtained through tensor.grad.untyped_storage().

>>> t = torch.zeros(3, requires_grad=True)
>>> t.sum().backward()
>>> assert list(t.untyped_storage()) == [0] * 12       # the storage of the tensor is just 0s
>>> assert list(t.grad.untyped_storage()) != [0] * 12  # the storage of the gradient isn't

There are also special cases where tensors do not have a typical storage, or have no storage at all (for example, tensors on the meta device carry no data).

Tensor subclasses or tensor-like objects can also display unusual behaviours. In general, we do not expect many use cases to require operating at the Storage level!

class torch.UntypedStorage(*args, **kwargs)[source][source]

bfloat16()[source]

Casts this storage to bfloat16 type.

bool()[source]

Casts this storage to bool type.

byte()[source]

Casts this storage to byte type.

byteswap(dtype)[source]

Swap bytes in underlying data.

char()[source]

Casts this storage to char type.

clone()[source]

Return a copy of this storage.

complex_double()[source]

Casts this storage to complex double type.

complex_float()[source]

Casts this storage to complex float type.

copy_()

cpu()[source]

Return a CPU copy of this storage if it’s not already on the CPU.

cuda(device=None, non_blocking=False)[source]

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters

Return type

Union[_StorageBase, TypedStorage]

data_ptr()

device: device

double()[source]

Casts this storage to double type.

element_size()

property filename: Optional[str]

Returns the file name associated with this storage.

The file name will be a string if the storage is on CPU and was created via from_file() with shared as True. This attribute is None otherwise.

fill_()

float()[source]

Casts this storage to float type.

float8_e4m3fn()[source]

Casts this storage to float8_e4m3fn type.

float8_e4m3fnuz()[source]

Casts this storage to float8_e4m3fnuz type.

float8_e5m2()[source]

Casts this storage to float8_e5m2 type.

float8_e5m2fnuz()[source]

Casts this storage to float8_e5m2fnuz type.

static from_buffer()

static from_file(filename, shared=False, size=0) → Storage

Creates a CPU storage backed by a memory-mapped file.

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the element type of the storage; in the case of an UntypedStorage, the file must contain at least size bytes). If shared is True the file will be created if needed.

Parameters
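A hedged sketch of from_file() usage; the file path below is purely hypothetical, and shared=True is used so that the file is created if it does not already exist:

>>> import torch
>>> nbytes = 4 * 4                                   # room for four float32 values
>>> s = torch.UntypedStorage.from_file("/tmp/storage_demo.bin", shared=True, size=nbytes)
>>> s.filename                                       # the storage is backed by the mapped file
'/tmp/storage_demo.bin'
>>> t = torch.empty(0, dtype=torch.float32).set_(s, storage_offset=0, stride=(1,), size=(4,))
>>> t.fill_(1.0)                                     # with shared=True, writes go through to the file
tensor([1., 1., 1., 1.])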

get_device()[source]

Return type

int

half()[source]

Casts this storage to half type.

hpu(device=None, non_blocking=False)[source]

Returns a copy of this object in HPU memory.

If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.

Parameters

Return type

Union[_StorageBase, TypedStorage]

int()[source]

Casts this storage to int type.

property is_cuda

property is_hpu

is_pinned(device='cuda')[source]

Determine whether the CPU storage is already pinned on device.

Parameters

device (str or torch.device) – The device to pin memory on (default: 'cuda'). This argument is discouraged and subject to deprecation.

Returns

A boolean variable.

is_shared()

is_sparse: bool = False

is_sparse_csr: bool = False

long()[source]

Casts this storage to long type.

mps()[source]

Return an MPS copy of this storage if it’s not already on MPS.

nbytes()

new()

pin_memory(device='cuda')[source]

Copy the CPU storage to pinned memory, if it’s not already pinned.

Parameters

device (str or torch.device) – The device to pin memory on (default: 'cuda'). This argument is discouraged and subject to deprecation.

Returns

A pinned CPU storage.
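A minimal sketch of pin_memory() on a storage, guarded by a CUDA availability check since pinning requires a CUDA-capable build; the output shown assumes a CUDA device is present, and in practice the tensor-level torch.Tensor.pin_memory() is generally preferred:

>>> import torch
>>> if torch.cuda.is_available():
...     s = torch.ones(1024).untyped_storage()
...     pinned = s.pin_memory()            # out-of-place: returns a page-locked copy
...     print(pinned.is_pinned(), s.is_pinned())
...
True False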

resizable()

resize_()

share_memory_()[source]

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Note that to mitigate issues like this, it is thread safe to call this function from multiple threads on the same object. It is NOT thread safe, though, to call any other function on self without proper synchronization. Please see Multiprocessing best practices for more details.

Note

When all references to a storage in shared memory are deleted, the associated shared memory object will also be deleted. PyTorch has a special cleanup process to ensure that this happens even if the current process exits unexpectedly.

It is worth noting the difference between share_memory_() and from_file() with shared = True:

  1. share_memory_ uses shm_open(3) to create a POSIX shared memory object, while from_file() uses open(2) to open the filename passed by the user.
  2. Both use an mmap(2) call with MAP_SHARED to map the file/object into the current virtual address space.
  3. share_memory_ will call shm_unlink(3) on the object after mapping it to make sure the shared memory object is freed when no process has the object open. torch.from_file(shared=True) does not unlink the file. This file is persistent and will remain until it is deleted by the user.

Returns

self
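A minimal sketch of share_memory_() within a single process; in practice the tensor-level torch.Tensor.share_memory_() is usually preferred:

>>> import torch
>>> s = torch.ones(3).untyped_storage()
>>> s.is_shared()
False
>>> s.share_memory_() is s           # in-place: moves the storage to shared memory and returns self
True
>>> s.is_shared()
True
>>> t = torch.ones(3)
>>> t.share_memory_()                # tensor-level equivalent
tensor([1., 1., 1.])
>>> t.untyped_storage().is_shared()
True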

short()[source]

Casts this storage to short type.

size()[source]

Return type

int

to(*, device, non_blocking=False)[source]

tolist()[source]

Return a list containing the elements of this storage.

type(dtype=None, non_blocking=False)[source]

Return type

Union[_StorageBase, TypedStorage]

untyped()[source]

Legacy Typed Storage

Warning

For historical context, PyTorch previously used typed storage classes, which are now deprecated and should be avoided. The following details this API in case you encounter it, although its usage is highly discouraged. All storage classes except for torch.UntypedStorage will be removed in the future, and torch.UntypedStorage will be used in all cases.

torch.Storage is an alias for the storage class that corresponds with the default data type (torch.get_default_dtype()). For example, if the default data type is torch.float, torch.Storage resolves to torch.FloatStorage.

The torch.<type>Storage and torch.cuda.<type>Storage classes, like torch.FloatStorage, torch.IntStorage, etc., are not actually ever instantiated. Calling their constructors creates a torch.TypedStorage with the appropriate torch.dtype and torch.device. torch.<type>Storage classes have all of the same class methods that torch.TypedStorage has.

A torch.TypedStorage is a contiguous, one-dimensional array of elements of a particular torch.dtype. It can be given any torch.dtype, and the internal data will be interpreted appropriately. torch.TypedStorage contains a torch.UntypedStorage which holds the data as an untyped array of bytes.

Every strided torch.Tensor contains a torch.TypedStorage, which stores all of the data that the torch.Tensor views.
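For reference, a small sketch of how the legacy typed wrapper relates to the untyped byte storage; note that Tensor.storage() may emit a deprecation warning in recent releases:

>>> import torch
>>> t = torch.ones(3)
>>> ts = t.storage()                 # legacy path: returns a TypedStorage
>>> ts.dtype
torch.float32
>>> ts.size()
3
>>> ts.untyped().nbytes()            # the typed wrapper is backed by the untyped byte storage
12
>>> ts.untyped().data_ptr() == t.untyped_storage().data_ptr()
True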

class torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

bfloat16()[source][source]

Casts this storage to bfloat16 type.

bool()[source][source]

Casts this storage to bool type.

byte()[source][source]

Casts this storage to byte type.

char()[source][source]

Casts this storage to char type.

clone()[source][source]

Return a copy of this storage.

complex_double()[source][source]

Casts this storage to complex double type.

complex_float()[source][source]

Casts this storage to complex float type.

copy_(source, non_blocking=None)[source][source]

cpu()[source][source]

Return a CPU copy of this storage if it’s not already on the CPU.

cuda(device=None, non_blocking=False)[source][source]

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters

Return type

Self

data_ptr()[source][source]

property device

double()[source][source]

Casts this storage to double type.

dtype: dtype

element_size()[source][source]

property filename: Optional[str]

Returns the file name associated with this storage if the storage was memory-mapped from a file, or None if the storage was not created by memory-mapping a file.

fill_(value)[source][source]

float()[source][source]

Casts this storage to float type.

float8_e4m3fn()[source][source]

Casts this storage to float8_e4m3fn type.

float8_e4m3fnuz()[source][source]

Casts this storage to float8_e4m3fnuz type.

float8_e5m2()[source][source]

Casts this storage to float8_e5m2 type.

float8_e5m2fnuz()[source][source]

Casts this storage to float8_e5m2fnuz type.

classmethod from_buffer(*args, **kwargs)[source][source]

classmethod from_file(filename, shared=False, size=0) → Storage[source][source]

Creates a CPU storage backed by a memory-mapped file.

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters

get_device()[source][source]

Return type

int

half()[source][source]

Casts this storage to half type.

hpu(device=None, non_blocking=False)[source][source]

Returns a copy of this object in HPU memory.

If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.

Parameters

Return type

Self

int()[source][source]

Casts this storage to int type.

property is_cuda

property is_hpu

is_pinned(device='cuda')[source][source]

Determine whether the CPU TypedStorage is already pinned on device.

Parameters

device (str or torch.device) – The device to pin memory on (default: 'cuda'). This argument is discouraged and subject to deprecation.

Returns

A boolean variable.

is_shared()[source][source]

is_sparse: bool = False

long()[source][source]

Casts this storage to long type.

nbytes()[source][source]

pickle_storage_type()[source][source]

pin_memory(device='cuda')[source][source]

Copy the CPU TypedStorage to pinned memory, if it’s not already pinned.

Parameters

device (str or torch.device) – The device to pin memory on (default: 'cuda'). This argument is discouraged and subject to deprecation.

Returns

A pinned CPU storage.

resizable()[source][source]

resize_(size)[source][source]

share_memory_()[source][source]

See torch.UntypedStorage.share_memory_()

short()[source][source]

Casts this storage to short type.

size()[source][source]

to(*, device, non_blocking=False)[source][source]

Returns a copy of this object in device memory.

If this object is already on the correct device, then no copy is performed and the original object is returned.

Parameters

Return type

Self

tolist()[source][source]

Return a list containing the elements of this storage.

type(dtype=None, non_blocking=False)[source][source]

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters

Return type

Union[_StorageBase, TypedStorage, str]

untyped()[source][source]

Return the internal torch.UntypedStorage.

class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.float64[source]

class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.float32[source]

class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.float16[source]

class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.int64[source]

class torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.int32[source]

class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.int16[source]

class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.int8[source]

class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.uint8[source]

class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.bool[source]

class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.bfloat16[source]

class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.complex128[source]

class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.complex64[source]

class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.quint8[source]

class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.qint8[source]

class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.qint32[source]

class torch.QUInt4x2Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.quint4x2[source]

class torch.QUInt2x4Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]

dtype: torch.dtype = torch.quint2x4[source]