torch.autograd.functional.jacobian — PyTorch 2.7 documentation

torch.autograd.functional.jacobian(func, inputs, create_graph=False, strict=False, vectorize=False, strategy='reverse-mode')[source]

Compute the Jacobian of a given function.

Parameters

* func (function) – a Python function that takes Tensor inputs and returns a tuple of Tensors or a Tensor.
* inputs (tuple of Tensors or Tensor) – inputs to the function func.
* create_graph (bool, optional) – If True, the Jacobian will be computed in a differentiable manner. Note that when strict is False, the result cannot require gradients or be disconnected from the inputs. Defaults to False.
* strict (bool, optional) – If True, an error will be raised when we detect that there exists an input such that all the outputs are independent of it. If False, we return a Tensor of zeros as the Jacobian for said inputs, which is the expected mathematical value. Defaults to False.
* vectorize (bool, optional) – This feature is experimental. When computing the Jacobian, usually we invoke autograd.grad once per row of the Jacobian. If this flag is True, we use vmap as the backend to vectorize calls to autograd.grad so we only invoke it once instead of once per row. Defaults to False.
* strategy (str, optional) – Set to "forward-mode" or "reverse-mode" to determine whether the Jacobian will be computed with forward or reverse mode AD. Currently, "forward-mode" requires vectorize=True. Defaults to "reverse-mode".
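The effect of strict is easiest to see with a function that ignores one of its inputs; the following is a minimal sketch (ignores_y is a made-up name for illustration, not part of the library):

>>> import torch
>>> from torch.autograd.functional import jacobian
>>> def ignores_y(x, y):
...     return x.pow(2)
>>> inputs = (torch.rand(2), torch.rand(2))
>>> jacobian(ignores_y, inputs)[1]  # strict=False (default): zeros for the unused input y
tensor([[0., 0.],
        [0., 0.]])
>>> jacobian(ignores_y, inputs, strict=True)  # raises a RuntimeError instead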

Returns

If there is a single input and output, this will be a single Tensor containing the Jacobian for the linearized inputs and output. If one of the two is a tuple, then the Jacobian will be a tuple of Tensors. If both of them are tuples, then the Jacobian will be a tuple of tuples of Tensors, where Jacobian[i][j] contains the Jacobian of the ith output with respect to the jth input. Its size is the concatenation of the sizes of the corresponding output and the corresponding input, and it has the same dtype and device as the corresponding input. If strategy is forward-mode, the dtype will be that of the output; otherwise, the input.

Return type

Jacobian (Tensor or nested tuple of Tensors)
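To make the nested-tuple layout concrete, here is a small sketch (two_outputs and its shapes are made up for illustration):

>>> def two_outputs(x, y):
...     return x + y, x * y
>>> inputs = (torch.rand(3), torch.rand(3))
>>> jac = jacobian(two_outputs, inputs)
>>> len(jac), len(jac[0])  # one entry per (output, input) pair
(2, 2)
>>> jac[1][0].shape  # Jacobian of the 2nd output w.r.t. the 1st input
torch.Size([3, 3])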

Example

>>> def exp_reducer(x):
...     return x.exp().sum(dim=1)
>>> inputs = torch.rand(2, 2)
>>> jacobian(exp_reducer, inputs)
tensor([[[1.4917, 2.4352],
         [0.0000, 0.0000]],
        [[0.0000, 0.0000],
         [2.4369, 2.3799]]])

>>> jacobian(exp_reducer, inputs, create_graph=True)
tensor([[[1.4917, 2.4352],
         [0.0000, 0.0000]],
        [[0.0000, 0.0000],
         [2.4369, 2.3799]]], grad_fn=<ViewBackward0>)

>>> def exp_adder(x, y):
...     return 2 * x.exp() + 3 * y
>>> inputs = (torch.rand(2), torch.rand(2))
>>> jacobian(exp_adder, inputs)
(tensor([[2.8052, 0.0000],
         [0.0000, 3.3963]]),
 tensor([[3., 0.],
         [0., 3.]]))
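Forward-mode AD tends to help when a function has more outputs than inputs; a minimal sketch reusing exp_reducer and inputs from above, noting that "forward-mode" currently requires vectorize=True:

>>> jacobian(exp_reducer, inputs, vectorize=True, strategy="forward-mode").shape
torch.Size([2, 2, 2])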