torch.Tensor — PyTorch 2.7 documentation
Returns a new Tensor with data as the tensor data.
Returns a Tensor of size size filled with fill_value.
Returns a Tensor of size size filled with uninitialized data.
Returns a Tensor of size size filled with 1.
Returns a Tensor of size size filled with 0.
Is True if the Tensor is stored on the GPU, False otherwise.
Is True if the Tensor is quantized, False otherwise.
Is True if the Tensor is a meta tensor, False otherwise.
Is the torch.device where this Tensor is.
This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self.
Alias for dim()
Returns a new tensor containing real values of the self tensor for a complex-valued input tensor.
Returns a new tensor containing imaginary values of the self tensor.
Returns the number of bytes consumed by the "view" of elements of the Tensor if the Tensor does not use sparse storage layout.
Alias for element_size()
See torch.abs()
In-place version of abs()
Alias for abs()
In-place version of absolute(). Alias for abs_()
See torch.acos()
In-place version of acos()
See torch.arccos()
In-place version of arccos()
Add a scalar or tensor to self tensor.
In-place version of add()
See torch.addbmm()
In-place version of addbmm()
See torch.addcdiv()
In-place version of addcdiv()
See torch.addcmul()
In-place version of addcmul()
See torch.addmm()
In-place version of addmm()
See torch.sspaddmm()
See torch.addmv()
In-place version of addmv()
See torch.addr()
In-place version of addr()
Alias for adjoint()
See torch.allclose()
See torch.amax()
See torch.amin()
See torch.aminmax()
See torch.angle()
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.
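As an illustration (the tensor values and the lambda are made up), apply_ rewrites each element in place; it only works on CPU tensors, so it is mainly useful for quick experiments rather than performance-critical code:
>>> t = torch.tensor([1., 2., 3.])
>>> t.apply_(lambda v: v * 2)   # element by element, in place, CPU only
tensor([2., 4., 6.])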
See torch.argmax()
See torch.argmin()
See torch.argsort()
See torch.argwhere()
See torch.asin()
In-place version of asin()
See torch.arcsin()
In-place version of arcsin()
See torch.atan()
In-place version of atan()
See torch.arctan()
In-place version of arctan()
See torch.atan2()
In-place version of atan2()
See torch.arctan2()
In-place version of arctan2(). Alias for atan2_()
See torch.all()
See torch.any()
Computes the gradient of the current tensor with respect to graph leaves.
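A minimal autograd round trip, with arbitrary example values, showing how backward() populates grad on a leaf tensor:
>>> x = torch.tensor([2., 3.], requires_grad=True)
>>> y = (x * x).sum()   # y = x0**2 + x1**2
>>> y.backward()        # dy/dx = 2 * x
>>> x.grad
tensor([4., 6.])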
See torch.baddbmm()
In-place version of baddbmm()
Returns a result tensor where each result[i] is independently sampled from Bernoulli(self[i]).
Fills each location of self with an independent sample from Bernoulli(p).
self.bfloat16() is equivalent to self.to(torch.bfloat16).
See torch.bincount()
In-place version of bitwise_not()
In-place version of bitwise_and()
In-place version of bitwise_or()
In-place version of bitwise_xor()
See torch.bitwise_left_shift()
In-place version of bitwise_left_shift()
See torch.bitwise_right_shift()
In-place version of bitwise_right_shift()
See torch.bmm()
self.bool() is equivalent to self.to(torch.bool).
self.byte() is equivalent to self.to(torch.uint8).
See torch.broadcast_to().
Fills the tensor with numbers drawn from the Cauchy distribution.
See torch.ceil()
In-place version of ceil()
self.char() is equivalent to self.to(torch.int8).
See torch.cholesky()
See torch.chunk()
See torch.clamp()
In-place version of clamp()
Alias for clamp().
Alias for clamp_().
See torch.clone()
Returns a tensor that is contiguous in memory and contains the same data as the self tensor.
Copies the elements from src into self tensor and returns self.
See torch.conj()
In-place version of conj_physical()
See torch.copysign()
In-place version of copysign()
See torch.cos()
In-place version of cos()
See torch.cosh()
In-place version of cosh()
See torch.corrcoef()
See torch.cov()
See torch.acosh()
In-place version of acosh()
See torch.arccosh()
In-place version of arccosh()
Returns a copy of this object in CPU memory.
See torch.cross()
Returns a copy of this object in CUDA memory.
See torch.cummax()
See torch.cummin()
See torch.cumprod()
In-place version of cumprod()
See torch.cumsum()
In-place version of cumsum()
self.chalf() is equivalent to self.to(torch.complex32).
self.cfloat() is equivalent to self.to(torch.complex64).
self.cdouble() is equivalent to self.to(torch.complex128).
Returns the address of the first element of self tensor.
See torch.deg2rad()
Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
See torch.det()
Return the number of dense dimensions in a sparse tensor self.
Returns a new Tensor, detached from the current graph.
Detaches the Tensor from the graph that created it, making it a leaf.
See torch.diag()
See torch.diagflat()
See torch.diagonal()
Fill the main diagonal of a tensor that has at least 2 dimensions.
See torch.fmax()
See torch.fmin()
See torch.diff()
See torch.digamma()
In-place version of digamma()
Returns the number of dimensions of self tensor.
Returns the uniquely determined tuple of int describing the dim order or physical layout of self.
See torch.dist()
See torch.div()
In-place version of div()
See torch.divide()
In-place version of divide()
See torch.dot()
self.double() is equivalent to self.to(torch.float64).
See torch.dsplit()
Returns the size in bytes of an individual element.
See torch.eq()
In-place version of eq()
See torch.equal()
See torch.erf()
In-place version of erf()
See torch.erfc()
In-place version of erfc()
See torch.erfinv()
In-place version of erfinv()
See torch.exp()
In-place version of exp()
See torch.expm1()
In-place version of expm1()
Returns a new view of the self tensor with singleton dimensions expanded to a larger size.
Expand this tensor to the same size as other.
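For example (shapes chosen arbitrarily), expand broadcasts a singleton dimension without copying data:
>>> x = torch.tensor([[1], [2], [3]])   # shape (3, 1)
>>> x.expand(3, 4)                      # a view; no memory is allocated for the repeats
tensor([[1, 1, 1, 1],
        [2, 2, 2, 2],
        [3, 3, 3, 3]])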
Fills self tensor with elements drawn from the exponential distribution's PDF (probability density function): f(x) = λe^(−λx) for x > 0.
See torch.fix().
In-place version of fix()
Fills self tensor with the specified value.
See torch.flatten()
See torch.flip()
See torch.fliplr()
See torch.flipud()
self.float() is equivalent to self.to(torch.float32).
In-place version of float_power()
See torch.floor()
In-place version of floor()
In-place version of floor_divide()
See torch.fmod()
In-place version of fmod()
See torch.frac()
In-place version of frac()
See torch.frexp()
See torch.gather()
See torch.gcd()
In-place version of gcd()
See torch.ge().
In-place version of ge().
In-place version of greater_equal().
Fills self tensor with elements drawn from the geometric distribution.
See torch.geqrf()
See torch.ger()
For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides.
See torch.gt().
In-place version of gt().
See torch.greater().
In-place version of greater().
self.half() is equivalent to self.to(torch.float16).
See torch.nn.functional.hardshrink()
See torch.histc()
See torch.hsplit()
See torch.hypot()
In-place version of hypot()
See torch.i0()
In-place version of i0()
See torch.igamma()
In-place version of igamma()
See torch.igammac()
In-place version of igammac()
Accumulate the elements of alpha times source into the self tensor by adding to the indices in the order given in index.
Out-of-place version of torch.Tensor.index_add_().
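A small sketch of index_add_ (the shapes and the alpha value are illustrative): rows of source, scaled by alpha, are added to the rows of self selected by index:
>>> x = torch.zeros(5, 3)
>>> index = torch.tensor([0, 4])
>>> src = torch.ones(2, 3)
>>> x.index_add_(0, index, src, alpha=2.0)   # rows 0 and 4 receive 2.0 * src rows
tensor([[2., 2., 2.],
        [0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.],
        [2., 2., 2.]])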
Copies the elements of tensor into the self tensor by selecting the indices in the order given in index.
Out-of-place version of torch.Tensor.index_copy_().
Fills the elements of the self tensor with value value by selecting the indices in the order given in index.
Out-of-place version of torch.Tensor.index_fill_().
Puts values from the tensor values into the tensor self using the indices specified in indices (which is a tuple of Tensors).
Out-of-place version of index_put_().
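For illustration (indices and values are arbitrary), index_put_ takes a tuple of index tensors, one per dimension, and writes the given values at those coordinates:
>>> x = torch.zeros(3, 3)
>>> rows = torch.tensor([0, 2])
>>> cols = torch.tensor([1, 1])
>>> x.index_put_((rows, cols), torch.tensor([5., 7.]))   # x[0, 1] = 5, x[2, 1] = 7
tensor([[0., 5., 0.],
        [0., 0., 0.],
        [0., 7., 0.]])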
Accumulates the elements of source into the self tensor at the indices given in index, using the reduction given by the reduce argument.
Return the indices tensor of a sparse COO tensor.
See torch.inner().
self.int() is equivalent to self.to(torch.int32).
Given a quantized Tensor, self.int_repr() returns a CPU Tensor with uint8 as data type that stores the underlying uint8 values of the given Tensor.
See torch.inverse()
See torch.isclose()
See torch.isfinite()
See torch.isinf()
See torch.isposinf()
See torch.isneginf()
See torch.isnan()
Returns True if self tensor is contiguous in memory in the order specified by memory format.
Returns True if the data type of self is a complex data type.
Returns True if the conjugate bit of self is set to true.
Returns True if the data type of self is a floating point data type.
See torch.is_inference()
All Tensors that have requires_grad set to False are leaf Tensors by convention.
Returns True if this tensor resides in pinned memory.
Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
Checks if tensor is in shared memory.
Returns True if the data type of self is a signed data type.
Is True if the Tensor uses sparse COO storage layout, False otherwise.
See torch.istft()
See torch.isreal()
Returns the value of this tensor as a standard Python number.
See torch.kthvalue()
See torch.lcm()
In-place version of lcm()
See torch.ldexp()
In-place version of ldexp()
See torch.le().
In-place version of le().
See torch.less_equal().
In-place version of less_equal().
See torch.lerp()
In-place version of lerp()
See torch.lgamma()
In-place version of lgamma()
See torch.log()
In-place version of log()
See torch.logdet()
See torch.log10()
In-place version of log10()
See torch.log1p()
In-place version of log1p()
See torch.log2()
In-place version of log2()
Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ.
In-place version of logical_and()
In-place version of logical_not()
In-place version of logical_or()
In-place version of logical_xor()
See torch.logit()
In-place version of logit()
self.long() is equivalent to self.to(torch.int64).
See torch.lt().
In-place version of lt().
See torch.less().
In-place version of less().
See torch.lu()
See torch.lu_solve()
Makes a cls instance with the same data pointer as self.
Applies callable to each element of the self tensor and the given tensor, and stores the results in the self tensor.
Copies elements from source into self tensor at positions where the mask is True.
Out-of-place version of torch.Tensor.masked_scatter_()
Fills elements of self tensor with value where mask is True.
Out-of-place version of torch.Tensor.masked_fill_()
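A minimal masked_fill_ sketch (the mask and fill value are chosen arbitrarily):
>>> x = torch.tensor([1., 2., 3.])
>>> x.masked_fill_(x > 1.5, 0.)   # overwrite elements where the mask is True
tensor([1., 0., 0.])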
See torch.matmul()
See torch.max()
See torch.maximum()
See torch.mean()
Defines how to transform other when loading it into self in load_state_dict().
See torch.nanmean()
See torch.median()
See torch.min()
See torch.minimum()
See torch.mm()
See torch.smm()
See torch.mode()
See torch.movedim()
See torch.moveaxis()
See torch.msort()
See torch.mul().
In-place version of mul().
See torch.multiply().
In-place version of multiply().
See torch.mv()
See torch.mvlgamma()
In-place version of mvlgamma()
See torch.nansum()
See torch.narrow().
See torch.narrow_copy().
Alias for dim()
See torch.nan_to_num().
In-place version of nan_to_num().
See torch.ne().
In-place version of ne().
See torch.not_equal().
In-place version of not_equal().
See torch.neg()
In-place version of neg()
See torch.negative()
In-place version of negative()
Alias for numel()
In-place version of nextafter()
See torch.nonzero()
See torch.norm()
Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
See torch.numel()
Returns the tensor as a NumPy ndarray.
See torch.orgqr()
See torch.ormqr()
See torch.outer().
See torch.permute()
Copies the tensor to pinned memory, if it's not already pinned.
See torch.pinverse()
In-place version of polygamma()
See torch.positive()
See torch.pow()
In-place version of pow()
See torch.prod()
Copies the elements from source into the positions specified by index.
See torch.qr()
Returns the quantization scheme of a given QTensor.
See torch.quantile()
Given a Tensor quantized by linear (affine) quantization, returns the scale of the underlying quantizer.
Given a Tensor quantized by linear (affine) quantization, returns the zero_point of the underlying quantizer.
Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer.
Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer.
Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.
See torch.rad2deg()
Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1].
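For example (bounds are illustrative and the result varies per run), random_ draws integers and stores them in the tensor's existing dtype:
>>> t = torch.empty(4)
>>> t.random_(0, 10)   # uniform integers in [0, 9], stored as float32 here; e.g. tensor([3., 7., 0., 9.])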
See torch.ravel()
In-place version of reciprocal()
Marks the tensor as having been used by this stream.
Registers a backward hook.
Registers a backward hook that runs after grad accumulation.
In-place version of remainder()
See torch.renorm()
In-place version of renorm()
Repeats this tensor along the specified dimensions.
See torch.repeat_interleave().
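For illustration (shapes are arbitrary), repeat copies the data, unlike expand:
>>> x = torch.tensor([1, 2, 3])
>>> x.repeat(2, 2)   # treated as shape (1, 3), tiled to (2, 6); the data is copied
tensor([[1, 2, 3, 1, 2, 3],
        [1, 2, 3, 1, 2, 3]])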
Is True if gradients need to be computed for this Tensor, False otherwise.
Changes whether autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place.
Returns a tensor with the same data and number of elements as self but with the specified shape.
Returns this tensor with the same shape as other.
Resizes self tensor to the specified size.
Resizes the self tensor to be the same size as the specified tensor.
Enables this Tensor to have its grad populated during backward().
Is True if this Tensor is non-leaf and its grad is enabled to be populated during backward(), False otherwise.
See torch.roll()
See torch.rot90()
See torch.round()
In-place version of round()
See torch.rsqrt()
In-place version of rsqrt()
Out-of-place version of torch.Tensor.scatter_()
Writes all values from the tensor src into self at the indices specified in the index tensor.
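A small scatter_ sketch (index and src are illustrative): along dim 0, each column j of src is written to row index[0][j] of self:
>>> x = torch.zeros(3, 5)
>>> index = torch.tensor([[0, 1, 2, 0, 1]])
>>> src = torch.arange(1., 6.).unsqueeze(0)   # shape (1, 5)
>>> x.scatter_(0, index, src)                 # x[index[0][j], j] = src[0][j]
tensor([[1., 0., 0., 4., 0.],
        [0., 2., 0., 0., 5.],
        [0., 0., 3., 0., 0.]])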
Adds all values from the tensor src into self at the indices specified in the index tensor, in a similar fashion to scatter_().
Out-of-place version of torch.Tensor.scatter_add_()
Reduces all values from the src tensor to the indices specified in the index tensor in the self tensor, using the reduction defined by the reduce argument ("sum", "prod", "mean", "amax", "amin").
Out-of-place version of torch.Tensor.scatter_reduce_()
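For example (values chosen arbitrarily), with reduce="sum" and the default include_self=True, elements of src that share an index are summed together with the existing entries of self:
>>> x = torch.zeros(3)
>>> index = torch.tensor([0, 0, 1, 2, 2])
>>> src = torch.tensor([1., 2., 3., 4., 5.])
>>> x.scatter_reduce_(0, index, src, reduce="sum")
tensor([3., 3., 9.])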
See torch.select()
Sets the underlying storage, size, and strides.
Moves the underlying storage to shared memory.
self.short() is equivalent to self.to(torch.int16).
See torch.sigmoid()
In-place version of sigmoid()
See torch.sign()
In-place version of sign()
See torch.signbit()
See torch.sgn()
In-place version of sgn()
See torch.sin()
In-place version of sin()
See torch.sinc()
In-place version of sinc()
See torch.sinh()
In-place version of sinh()
See torch.asinh()
In-place version of asinh()
See torch.arcsinh()
In-place version of arcsinh()
Returns the size of the self tensor.
Returns the size of the self tensor.
See torch.slogdet()
Alias for torch.nn.functional.softmax().
See torch.sort()
See torch.split()
Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask.
Return the number of sparse dimensions in a sparse tensor self.
See torch.sqrt()
In-place version of sqrt()
See torch.square()
In-place version of square()
See torch.squeeze()
In-place version of squeeze()
See torch.std()
See torch.stft()
Returns the underlying TypedStorage.
Returns the underlying UntypedStorage.
Returns self tensor's offset in the underlying storage in terms of number of storage elements (not bytes).
Returns the type of the underlying storage.
Returns the stride of self tensor.
See torch.sub().
In-place version of sub()
See torch.subtract().
In-place version of subtract().
See torch.sum()
Sum this tensor to size.
See torch.svd()
See torch.swapaxes()
See torch.swapdims()
See torch.t()
In-place version of t()
See torch.tile()
Performs Tensor dtype and/or device conversion.
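Typical uses of to(), sketched under the assumption that a CUDA device is available for the second call:
>>> t = torch.zeros(2, dtype=torch.float32)
>>> t.to(torch.float64)                 # dtype conversion; returns a new tensor
>>> t.to("cuda:0", non_blocking=True)   # device move (assumes a CUDA device is present)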
Returns a copy of the tensor in torch.mkldnn layout.
See torch.take()
See torch.tan()
In-place version of tan()
See torch.tanh()
In-place version of tanh()
See torch.atanh()
In-place version of atanh()
See torch.arctanh()
In-place version of arctanh()
Returns the tensor as a (nested) list.
See torch.topk()
Creates a strided copy of self if self is not a strided tensor, otherwise returns self.
Returns a sparse copy of the tensor.
Convert a tensor to compressed row storage format (CSR).
Convert a tensor to compressed column storage (CSC) format.
Convert a tensor to a block sparse row (BSR) storage format of given blocksize.
Convert a tensor to a block sparse column (BSC) storage format of given blocksize.
See torch.trace()
In-place version of transpose()
See torch.tril()
In-place version of tril()
See torch.triu()
In-place version of triu()
In-place version of true_divide()
See torch.trunc()
In-place version of trunc()
Returns the type if dtype is not provided, else casts this object to the specified type.
Returns this tensor cast to the type of the given tensor.
See torch.unbind()
See torch.unflatten().
Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.
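For illustration (the size and step here are arbitrary), unfold turns a 1-D tensor into overlapping windows:
>>> x = torch.arange(1., 8.)
>>> x.unfold(0, 2, 1)   # windows of size 2 with step 1 along dim 0
tensor([[1., 2.],
        [2., 3.],
        [3., 4.],
        [4., 5.],
        [5., 6.],
        [6., 7.]])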
Fills self tensor with numbers sampled from the continuous uniform distribution.
Returns the unique elements of the input tensor.
Eliminates all but the first element from every consecutive group of equivalent elements.
In-place version of unsqueeze()
Return the values tensor of a sparse COO tensor.
See torch.var()
See torch.vdot()
Returns a new tensor with the same data as the self tensor but of a different shape.
View this tensor as the same size as other.
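A minimal view sketch (shape chosen arbitrarily): the new shape must be compatible with the tensor's size and strides, and no data is copied:
>>> x = torch.arange(6)
>>> x.view(2, 3)   # shares storage with x
tensor([[0, 1, 2],
        [3, 4, 5]])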
See torch.vsplit()
self.where(condition, y) is equivalent to torch.where(condition, self, y).
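For example (values are illustrative), elements of self are kept where the condition holds and elements of y are taken elsewhere:
>>> x = torch.tensor([-1., 2., -3.])
>>> y = torch.zeros(3)
>>> x.where(x > 0, y)
tensor([0., 2., 0.])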
See torch.xlogy()
In-place version of xlogy()
Returns a copy of this object in XPU memory.
Fills self tensor with zeros.