torch.func.jvp — PyTorch 2.7 documentation

torch.func.jvp(func, primals, tangents, *, strict=False, has_aux=False)[source]

Standing for the Jacobian-vector product, returns a tuple containing the output of func(*primals) and the "Jacobian of func evaluated at primals" times tangents. This is also known as forward-mode autodiff.

Parameters

func (function) – A Python function that takes one or more arguments, one of which must be a Tensor, and returns one or more Tensors.

primals (Tensors) – Positional arguments to func that must all be Tensors. jvp computes the derivative with respect to these arguments.

tangents (Tensors) – The "vector" for which the Jacobian-vector product is computed. Must be the same structure and sizes as the inputs to func.

has_aux (bool) – Flag indicating that func returns a (output, aux) tuple where the first element is the output of the function to be differentiated and the second element is auxiliary objects that will not be differentiated. Default: False.

Returns

Returns a (output, jvp_out) tuple containing the output of func evaluated at primals and the Jacobian-vector product. If has_aux is True, then instead returns a (output, jvp_out, aux) tuple.
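As a minimal sketch of the has_aux behavior described above (the function f and its auxiliary dict are hypothetical, chosen only for illustration): when func returns an (output, aux) pair, passing has_aux=True makes jvp differentiate only the first element and pass the second through untouched.

```python
import torch
from torch.func import jvp

# Hypothetical function returning (output, aux); aux is not differentiated.
def f(x):
    out = x.sin()
    aux = {"mean": out.mean()}  # bookkeeping value carried alongside the output
    return out, aux

x = torch.randn(5)
t = torch.ones(5)

# With has_aux=True, jvp returns a 3-tuple: (output, jvp_out, aux).
output, jvp_out, aux = jvp(f, (x,), (t,), has_aux=True)

assert torch.allclose(output, x.sin())
# d/dx sin(x) = cos(x); the tangent is all ones, so jvp_out == cos(x).
assert torch.allclose(jvp_out, x.cos())
```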

Note

You may see this API error out with “forward-mode AD not implemented for operator X”. If so, please file a bug report and we will prioritize it.

jvp is useful when you wish to compute the gradient of a function R^1 -> R^N in a single call:

import torch
from torch.func import jvp

x = torch.randn([])
f = lambda x: x * torch.tensor([1., 2., 3.])
value, grad = jvp(f, (x,), (torch.tensor(1.),))
assert torch.allclose(value, f(x))
assert torch.allclose(grad, torch.tensor([1., 2., 3.]))

jvp() can support functions with multiple inputs by passing in a tangent for each input:

import torch
from torch.func import jvp

x = torch.randn(5)
y = torch.randn(5)
f = lambda x, y: x * y
_, output = jvp(f, (x, y), (torch.ones(5), torch.ones(5)))
assert torch.allclose(output, x + y)
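As a sanity check on the definition above (a sketch, not part of the official API docs): jvp(f, (x,), (v,)) should agree with the full Jacobian of f at x, computed separately with torch.func.jacrev, multiplied by the tangent v.

```python
import torch
from torch.func import jvp, jacrev

# Elementwise function; its Jacobian at x is a diagonal matrix.
f = lambda x: x.sin() * x
x = torch.randn(4)
v = torch.randn(4)

# Forward-mode: Jacobian-vector product in one pass.
_, jvp_out = jvp(f, (x,), (v,))

# Reverse-mode: materialize the full 4x4 Jacobian, then multiply by v.
J = jacrev(f)(x)
assert torch.allclose(jvp_out, J @ v)
```

Materializing the full Jacobian is wasteful for large outputs; the point of jvp is that it computes J @ v directly without ever building J.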