torch.autograd.grad — PyTorch 2.7 documentation

torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=None, is_grads_batched=False, materialize_grads=False)[source]

Compute and return the sum of gradients of outputs with respect to the inputs.

grad_outputs should be a sequence of length matching outputs, containing the “vector” in the vector-Jacobian product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require grad, then the gradient can be None.
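For illustration, a minimal sketch of supplying grad_outputs for a non-scalar output (the tensor values here are arbitrary examples):

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2  # non-scalar output, so a "vector" must be supplied

# grad_outputs supplies the vector v in the vector-Jacobian product v @ J.
v = torch.tensor([1.0, 0.5, 0.25])
(grad_x,) = torch.autograd.grad(outputs=y, inputs=x, grad_outputs=v)
print(grad_x)  # tensor([2.0000, 1.0000, 0.5000]), since dy/dx = 2 elementwise
```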

Note

The only_inputs argument is deprecated and is now ignored (it defaults to True). To accumulate gradients for other parts of the graph, please use torch.autograd.backward.
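A short sketch of the distinction: torch.autograd.grad returns gradients without touching .grad, while torch.autograd.backward accumulates into .grad on leaf tensors:

```python
import torch

w = torch.randn(2, requires_grad=True)
loss = (w ** 2).sum()

# torch.autograd.grad returns the gradient without accumulating it:
(gw,) = torch.autograd.grad(loss, w, retain_graph=True)
print(w.grad)  # None -- nothing was written to .grad

# backward() accumulates into .grad for all leaf tensors in the graph:
loss.backward()
print(torch.allclose(w.grad, gw))  # True
```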

Parameters

- outputs (sequence of Tensor) – outputs of the differentiated function.
- inputs (sequence of Tensor) – inputs w.r.t. which the gradient will be returned (and not accumulated into .grad).
- grad_outputs (sequence of Tensor, optional) – the “vector” in the vector-Jacobian product, usually the pre-computed gradients w.r.t. each output. None values can be specified for scalar Tensors or ones that don’t require grad. Default: None.
- retain_graph (bool, optional) – if False, the graph used to compute the grad will be freed after the call. Defaults to the value of create_graph.
- create_graph (bool, optional) – if True, the graph of the derivative will be constructed, allowing higher-order derivatives to be computed. Default: False.
- allow_unused (Optional[bool], optional) – if False, specifying inputs that were not used when computing outputs (and therefore whose grad is always zero) is an error. Defaults to the value of materialize_grads.
- is_grads_batched (bool, optional) – if True, the first dimension of each tensor in grad_outputs is interpreted as a batch dimension, and a batch of vector-Jacobian products is computed. Default: False.
- materialize_grads (bool, optional) – if True, return zeros instead of None for inputs that are unused when computing outputs. Default: False.
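A small sketch of how allow_unused and materialize_grads interact (variable names are illustrative):

```python
import torch

a = torch.randn(2, requires_grad=True)
b = torch.randn(2, requires_grad=True)
out = (a * 3).sum()  # b never participates in out

# Passing an unused input is an error unless allow_unused=True.
ga, gb = torch.autograd.grad(out, (a, b), allow_unused=True, retain_graph=True)
print(gb)  # None

# materialize_grads=True returns zeros instead of None for unused inputs.
ga, gb = torch.autograd.grad(out, (a, b), materialize_grads=True)
print(gb)  # tensor([0., 0.])
```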

Return type

tuple[torch.Tensor, …]
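As a further illustration (shapes chosen for the example), is_grads_batched evaluates several vector-Jacobian products in one call; using rows of the identity matrix as the batched vectors recovers the full Jacobian:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x ** 2

# Each row of `vs` is treated as a separate grad_outputs vector.
vs = torch.eye(3)
(jac,) = torch.autograd.grad(y, x, grad_outputs=vs, is_grads_batched=True)
print(jac)  # the 3x3 Jacobian of y = x**2, i.e. diag(2 * x)
```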