CosineEmbeddingLoss — PyTorch 2.7 documentation

class torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')[source][source]

Creates a criterion that measures the loss given input tensors \(x_1\), \(x_2\) and a Tensor label \(y\) with values 1 or -1. Use \(y = 1\) to maximize the cosine similarity of the two inputs, and \(y = -1\) otherwise. This is typically used for learning nonlinear embeddings or for semi-supervised learning.

The loss function for each sample is:

\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}

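The piecewise formula above can be checked numerically against the functional form of the loss. This is a sketch: the batch size, margin, and seed are arbitrary choices, not values from this page.

```python
import torch
import torch.nn.functional as F

# Compute the piecewise loss by hand and compare with PyTorch's
# built-in cosine_embedding_loss (margin chosen arbitrarily for illustration).
torch.manual_seed(0)
x1 = torch.randn(4, 8)
x2 = torch.randn(4, 8)
y = torch.tensor([1, -1, 1, -1])
margin = 0.2

cos = F.cosine_similarity(x1, x2, dim=1)
# y = 1 branch: 1 - cos(x1, x2); y = -1 branch: max(0, cos(x1, x2) - margin)
manual = torch.where(y == 1, 1 - cos, torch.clamp(cos - margin, min=0)).mean()
builtin = F.cosine_embedding_loss(x1, x2, y, margin=margin, reduction="mean")
print(manual, builtin)  # the two values should agree
```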
Parameters

* margin (float, optional) – Should be a number from -1 to 1; 0 to 0.5 is suggested. Default: 0.
* size_average (bool, optional) – Deprecated (see reduction).
* reduce (bool, optional) – Deprecated (see reduction).
* reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'.

Shape:

* Input1: (N, D) or (D), where N is the batch size and D is the embedding dimension.
* Input2: (N, D) or (D), same shape as Input1.
* Target: (N) or ().
* Output: If reduction is 'none', then (N), otherwise scalar.
Examples:

import torch
import torch.nn as nn

loss = nn.CosineEmbeddingLoss()
input1 = torch.randn(3, 5, requires_grad=True)
input2 = torch.randn(3, 5, requires_grad=True)
target = torch.ones(3)
output = loss(input1, input2, target)
output.backward()
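With reduction='none' the criterion returns one loss per pair instead of a scalar, which is useful for inspecting which pairs violate the margin. A minimal sketch (the margin value and targets here are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

# reduction='none' yields a per-sample loss tensor of shape (N,)
torch.manual_seed(0)
loss_fn = nn.CosineEmbeddingLoss(margin=0.5, reduction='none')
input1 = torch.randn(3, 5)
input2 = torch.randn(3, 5)
target = torch.tensor([1, -1, -1])  # 1: pull together, -1: push apart
per_sample = loss_fn(input1, input2, target)
print(per_sample.shape)  # one loss value per input pair
```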