no_grad — PyTorch 2.7 documentation

class torch.no_grad(orig_func=None)[source]

Context-manager that disables gradient calculation.

Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.
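For instance, wrapping a forward pass in no_grad keeps autograd from recording the computation. A minimal sketch, where the Linear model and random batch are placeholders:

>>> model = torch.nn.Linear(4, 2)
>>> batch = torch.randn(8, 4)
>>> with torch.no_grad():
...     preds = model(batch)  # no autograd graph is recorded here
>>> preds.requires_grad
False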

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. There is an exception! All factory functions, or functions that create a new Tensor and take a requires_grad kwarg, will NOT be affected by this mode.
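As an illustration of the factory-function exception, here is a sketch using torch.ones, one of the factory functions that accepts a requires_grad kwarg:

>>> with torch.no_grad():
...     b = torch.ones(3, requires_grad=True)  # factory kwarg is honored despite no_grad
>>> b.requires_grad
True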

This context manager is thread local; it will not affect computation in other threads.
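A small sketch of the thread-local behavior, assuming gradient mode is enabled by default in the process; the worker function and flags dict are just for demonstration:

>>> import threading
>>> flags = {}
>>> def worker():
...     # a freshly spawned thread starts with grad mode enabled,
...     # regardless of no_grad being active in the spawning thread
...     flags["worker"] = torch.is_grad_enabled()
>>> with torch.no_grad():
...     t = threading.Thread(target=worker)
...     t.start(); t.join()
...     flags["main"] = torch.is_grad_enabled()
>>> flags == {"worker": True, "main": False}
True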

Also functions as a decorator.

Note

This API does not apply to forward-mode AD. If you want to disable forward AD for a computation, you can unpack your dual tensors.
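A sketch of what this note means, using the public forward-AD helpers in torch.autograd.forward_ad: inside no_grad, tangents still propagate; unpacking a dual tensor and continuing with only its primal is what stops forward AD for the rest of the computation:

>>> import torch.autograd.forward_ad as fwAD
>>> with fwAD.dual_level():
...     x = fwAD.make_dual(torch.tensor([1.0]), torch.tensor([1.0]))
...     with torch.no_grad():
...         y = x * 2  # tangent is still propagated despite no_grad
...     print(fwAD.unpack_dual(y).tangent)
...     primal = fwAD.unpack_dual(x).primal  # plain tensor: forward AD stops here
...     z = primal * 2
...     print(fwAD.unpack_dual(z).tangent)
tensor([2.])
None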

Example:

>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
>>> @torch.no_grad
... def tripler(x):
...     return x * 3
>>> z = tripler(x)
>>> z.requires_grad
False

>>> # factory function exception
>>> with torch.no_grad():
...     a = torch.nn.Parameter(torch.rand(10))
>>> a.requires_grad
True