torch.addbmm — PyTorch 2.7 documentation

torch.addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) → Tensor

Performs a batch matrix-matrix product of matrices stored in batch1 and batch2, with a reduced add step (all matrix multiplications get accumulated along the first dimension). input is added to the final result.

batch1 and batch2 must be 3-D tensors each containing the same number of matrices.

If batch1 is a (b \times n \times m) tensor, batch2 is a (b \times m \times p) tensor, input must be broadcastable with a (n \times p) tensor and out will be a (n \times p) tensor.

out = \beta\ \text{input} + \alpha\ \left(\sum_{i=0}^{b-1} \text{batch1}_i \mathbin{@} \text{batch2}_i\right)
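The reduction above can be reproduced with torch.bmm followed by a sum over the batch dimension. The snippet below is a minimal sketch of that equivalence; the tensor names and shapes are illustrative, not part of the API:

import torch

M = torch.randn(3, 5)
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)

# addbmm accumulates all b matrix products along dim 0, then adds input
out = torch.addbmm(M, batch1, batch2, beta=1, alpha=1)

# reference computation: per-batch matmul, summed over the batch dimension
ref = 1 * M + 1 * torch.bmm(batch1, batch2).sum(dim=0)
print(torch.allclose(out, ref, atol=1e-6))  # expected: True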

If beta is 0, then the content of input will be ignored, and nan and inf in it will not be propagated.
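For instance, with beta=0 a NaN-filled input does not contaminate the result. A short sketch of that documented behaviour, using illustrative shapes:

import torch

nan_input = torch.full((3, 5), float('nan'))
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)

# beta=0 ignores input entirely, so its NaNs are not propagated
out = torch.addbmm(nan_input, batch1, batch2, beta=0)
print(torch.isnan(out).any())  # expected: tensor(False)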

For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers, otherwise they should be integers.

This operator supports TensorFloat32.
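On CUDA devices that support TF32 (e.g. Ampere or newer), the TensorFloat32 behaviour for matmul-style ops such as addbmm can be controlled with the global matmul switch. A hedged sketch; whether TF32 is actually used depends on the hardware and build:

import torch

if torch.cuda.is_available():
    # Allow TF32 for matmul-like kernels (faster, reduced precision);
    # set to False to force full float32 precision instead.
    torch.backends.cuda.matmul.allow_tf32 = True

    M = torch.randn(3, 5, device='cuda')
    batch1 = torch.randn(10, 3, 4, device='cuda')
    batch2 = torch.randn(10, 4, 5, device='cuda')
    out = torch.addbmm(M, batch1, batch2)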

On certain ROCm devices, when using float16 inputs this module will use different precision for backward.

Parameters

input (Tensor) – matrix to be added
batch1 (Tensor) – the first batch of matrices to be multiplied
batch2 (Tensor) – the second batch of matrices to be multiplied

Keyword Arguments

beta (Number, optional) – multiplier for input (β)
alpha (Number, optional) – multiplier for batch1 @ batch2 (α)
out (Tensor, optional) – the output tensor.

Example:

>>> M = torch.randn(3, 5)
>>> batch1 = torch.randn(10, 3, 4)
>>> batch2 = torch.randn(10, 4, 5)
>>> torch.addbmm(M, batch1, batch2)
tensor([[  6.6311,   0.0503,   6.9768, -12.0362,  -2.1653],
        [ -4.8185,  -1.4255,  -6.6760,   8.9453,   2.5743],
        [ -3.8202,   4.3691,   1.0943,  -1.1109,   5.4730]])
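The scaling factors can also be passed explicitly to weight the two terms independently; a small variation on the example above (only the output shape is shown, since the values depend on the random inputs):

>>> # scale the accumulated batch product by 2 and the added matrix by 0.5
>>> torch.addbmm(M, batch1, batch2, beta=0.5, alpha=2).shape
torch.Size([3, 5])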