Tensors


Created On: Mar 24, 2017 | Last Updated: Jan 16, 2024 | Last Verified: Nov 05, 2024

Tensors are a specialized data structure that is very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.

Tensors are similar to NumPy’s ndarrays, except that tensors can run on GPUs or other specialized hardware to accelerate computing. If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along in this quick API walkthrough.

import torch
import numpy as np

Tensor Initialization

Tensors can be initialized in various ways. Take a look at the following examples:

Directly from data

Tensors can be created directly from data. The data type is automatically inferred.
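
For example, a minimal sketch using a small nested Python list (the values are illustrative):

data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)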

From a NumPy array

Tensors can be created from NumPy arrays (and vice versa - see Bridge with NumPy).
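
A minimal sketch, assuming the data list from the previous example:

np_array = np.array(data)
x_np = torch.from_numpy(np_array)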

From another tensor:

The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.
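
A sketch that would produce output like the following, assuming x_data from the earlier example (the random values will differ from run to run):

x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")

x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")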

Ones Tensor:
 tensor([[1, 1],
        [1, 1]])

Random Tensor:
 tensor([[0.8823, 0.9150],
        [0.3829, 0.9593]])

With random or constant values:

shape is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.
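
A minimal sketch; the shape (2, 3) is illustrative:

shape = (2, 3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)

print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")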

Random Tensor:
 tensor([[0.3904, 0.6009, 0.2566],
        [0.7936, 0.9408, 0.1332]])

Ones Tensor:
 tensor([[1., 1., 1.],
        [1., 1., 1.]])

Zeros Tensor:
 tensor([[0., 0., 0.],
        [0., 0., 0.]])


Tensor Attributes

Tensor attributes describe their shape, datatype, and the device on which they are stored.
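
A minimal sketch, assuming an arbitrary 3x4 random tensor:

tensor = torch.rand(3, 4)

print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")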

Shape of tensor: torch.Size([3, 4])
Datatype of tensor: torch.float32
Device tensor is stored on: cpu


Tensor Operations

Over 100 tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random sampling, and more, are comprehensively described in the torch documentation.

Each of them can be run on the GPU (at typically higher speeds than on a CPU). If you’re using Colab, allocate a GPU by going to Edit > Notebook Settings.
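
A sketch of moving a tensor to the GPU, guarded with torch.cuda.is_available() so it also runs on CPU-only machines (it reuses the tensor from the previous section):

# We move our tensor to the GPU if available
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
    print(f"Device tensor is stored on: {tensor.device}")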

Device tensor is stored on: cuda:0

Try out some of the operations from the list. If you’re familiar with the NumPy API, you’ll find the Tensor API a breeze to use.

Standard numpy-like indexing and slicing:
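
A minimal sketch that zeroes out the second column of a 4x4 tensor of ones, producing the output below:

tensor = torch.ones(4, 4)
tensor[:, 1] = 0
print(tensor)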

tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

Joining tensors

You can use torch.cat to concatenate a sequence of tensors along a given dimension. See also torch.stack, another tensor-joining op that is subtly different from torch.cat.
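
For example, a sketch concatenating the 4x4 tensor from above with itself three times along dimension 1:

t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)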

tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])

Multiplying tensors

This computes the element-wise product

print(f"tensor.mul(tensor) \n {tensor.mul(tensor)} \n")

Alternative syntax:

print(f"tensor * tensor \n {tensor * tensor}")

tensor.mul(tensor)
 tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

tensor * tensor
 tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

This computes the matrix multiplication between two tensors
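
A sketch reusing the same tensor as above; tensor.T is its transpose:

print(f"tensor.matmul(tensor.T) \n {tensor.matmul(tensor.T)} \n")

# Alternative syntax:
print(f"tensor @ tensor.T \n {tensor @ tensor.T}")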

tensor.matmul(tensor.T)
 tensor([[3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.]])

tensor @ tensor.T
 tensor([[3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.]])

In-place operations

Operations that have a _ suffix are in-place. For example, x.copy_(y) and x.t_() will change x.
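
For example, a sketch that adds 5 to every element of the same tensor in place:

print(tensor, "\n")
tensor.add_(5)
print(tensor)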

tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

tensor([[6., 5., 6., 6.],
        [6., 5., 6., 6.],
        [6., 5., 6., 6.],
        [6., 5., 6., 6.]])

Note

In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss of history. Hence, their use is discouraged.


Bridge with NumPy

Tensors on the CPU and NumPy arrays can share their underlying memory locations, and changing one will change the other.

Tensor to NumPy array

t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")

t: tensor([1., 1., 1., 1., 1.])
n: [1. 1. 1. 1. 1.]

A change in the tensor reflects in the NumPy array.

t.add_(1)
print(f"t: {t}")
print(f"n: {n}")

t: tensor([2., 2., 2., 2., 2.])
n: [2. 2. 2. 2. 2.]

NumPy array to Tensor
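
A minimal sketch of the reverse direction:

n = np.ones(5)
t = torch.from_numpy(n)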

Changes in the NumPy array reflect in the tensor.

np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")

t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
n: [2. 2. 2. 2. 2.]
