torch.Tensor.to

Tensor.to(*args, **kwargs) → Tensor#

Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs).

Note

If self requires gradients (requires_grad=True) but the target dtype specified is an integer type, the returned tensor will implicitly have requires_grad=False. This is because only tensors with floating-point or complex dtypes can require gradients.
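For instance, a minimal illustration of this rule (values are arbitrary):

>>> import torch
>>> x = torch.randn(2, 2, requires_grad=True)
>>> x.to(torch.int32).requires_grad   # integer dtypes cannot carry gradients
False
>>> x.to(torch.float64).requires_grad # floating-point conversion keeps the flag
True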

Here are the ways to call to:

to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor

Returns a Tensor with the specified dtype

Args:

memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
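For example, a small sketch that converts dtype and memory layout in one call (the 4-D shape is arbitrary and chosen only because torch.channels_last requires a 4-D tensor):

>>> import torch
>>> x = torch.randn(1, 3, 4, 4)                                  # NCHW, contiguous
>>> y = x.to(torch.float16, memory_format=torch.channels_last)   # cast and reorder strides
>>> y.dtype, y.is_contiguous(memory_format=torch.channels_last)
(torch.float16, True)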

Note

According to C++ type conversion rules, converting a floating-point value to an integer type truncates the fractional part. If the truncated value cannot fit into the target type (e.g., casting torch.inf to torch.long), the behavior is undefined and the result may vary across platforms.
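For example, a quick illustration of the truncation rule (outputs shown for a typical platform):

>>> import torch
>>> torch.tensor([2.9, -2.9]).to(torch.long)   # fractional part is truncated toward zero
tensor([ 2, -2])
>>> # casting a non-finite value such as torch.inf to torch.long is undefined behavior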

to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor

Returns a Tensor with the specified device and (optional) dtype. If dtype is None, it is inferred to be self.dtype. When non_blocking is set to True, the function attempts to perform the conversion asynchronously with respect to the host, if possible. This asynchronous behavior applies to both pinned and pageable memory. However, caution is advised when using this feature; for more information, refer to the tutorial on good usage of non_blocking and pin_memory. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

Args:

memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
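For example, a minimal sketch of an asynchronous host-to-device copy (assumes a CUDA device is available; torch.cuda.synchronize() is used here only to make the completion point explicit):

>>> import torch
>>> src = torch.randn(2, 2).pin_memory()           # pinned host memory enables a true async copy
>>> if torch.cuda.is_available():
...     dst = src.to('cuda:0', non_blocking=True)  # copy may overlap with subsequent host work
...     torch.cuda.synchronize()                   # wait before relying on dst's contents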

to(other, non_blocking=False, copy=False) → Tensor

Returns a Tensor with the same torch.dtype and torch.device as the Tensor other. When non_blocking is set to True, the function attempts to perform the conversion asynchronously with respect to the host, if possible. This asynchronous behavior applies to both pinned and pageable memory. However, caution is advised when using this feature; for more information, refer to the tutorial on good usage of non_blocking and pin_memory. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

Example:

>>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64)

>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], device='cuda:0')

>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')

>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')