torch.nn.functional.cross_entropy — PyTorch 2.7 documentation

torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)[source]

Compute the cross entropy loss between input logits and target.

See CrossEntropyLoss for details.

Parameters

- input (Tensor) – Predicted unnormalized logits; see the Shape section below for supported shapes.
- target (Tensor) – Ground truth class indices or class probabilities; see the Shape section below for supported shapes.
- weight (Tensor, optional) – A manual rescaling weight given to each class. If given, has to be a Tensor of size C.
- size_average (bool, optional) – Deprecated (see reduction).
- ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. Only applicable when the target contains class indices. Default: -100
- reduce (bool, optional) – Deprecated (see reduction).
- reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction is applied; 'mean': the sum of the output is divided by the number of elements in the output; 'sum': the output is summed. Default: 'mean'
- label_smoothing (float, optional) – A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets become a mixture of the original ground truth and a uniform distribution over classes. Default: 0.0

Return type

Tensor
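For illustration, a minimal sketch (the tensor values and per-class weights here are hypothetical, not taken from the original docs) combining weight and ignore_index with class-index targets:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)
# The last sample is skipped: -100 matches the default ignore_index.
target = torch.tensor([0, 1, 2, -100])
# Hypothetical per-class rescaling weights, one entry per class (C=3).
weight = torch.tensor([1.0, 2.0, 0.5])
# With reduction='mean', the result is a weighted mean over non-ignored samples.
loss = F.cross_entropy(logits, target, weight=weight, ignore_index=-100)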

Shape:

- Input: shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss.
- Target: if containing class indices, shape (), (N) or (N, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss, where each value should be in [0, C). If containing class probabilities, same shape as the input, with each value in [0, 1].

where:

$$
\begin{aligned}
C &= \text{number of classes} \\
N &= \text{batch size}
\end{aligned}
$$
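A short sketch of the K-dimensional case (assuming a dense-prediction setup such as segmentation, which is not spelled out in this section):

import torch
import torch.nn.functional as F

# K=2: input is (N, C, d_1, d_2); target holds one class index per spatial position.
N, C, H, W = 2, 5, 4, 4
input = torch.randn(N, C, H, W, requires_grad=True)
target = torch.randint(C, (N, H, W), dtype=torch.int64)
loss = F.cross_entropy(input, target)  # scalar, since reduction defaults to 'mean'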

Examples:

Example of target with class indices

import torch
import torch.nn.functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.randint(5, (3,), dtype=torch.int64)
loss = F.cross_entropy(input, target)
loss.backward()
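A quick sanity check (not part of the original docs): with class-index targets, cross_entropy is equivalent to log_softmax followed by nll_loss.

import torch
import torch.nn.functional as F

input = torch.randn(3, 5)
target = torch.randint(5, (3,), dtype=torch.int64)
# cross_entropy fuses log_softmax and nll_loss into a single call.
manual = F.nll_loss(F.log_softmax(input, dim=1), target)
assert torch.allclose(F.cross_entropy(input, target), manual)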

Example of target with class probabilities

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
loss = F.cross_entropy(input, target)
loss.backward()
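Similarly, for probability targets the result matches the explicit formula (a sketch, not part of the original docs): the per-sample loss is the negative dot product of the target distribution with the log-probabilities, averaged over the batch.

import torch
import torch.nn.functional as F

input = torch.randn(3, 5)
target = torch.randn(3, 5).softmax(dim=1)
# -sum_c target_c * log_softmax(input)_c per sample, then the batch mean.
manual = -(target * F.log_softmax(input, dim=1)).sum(dim=1).mean()
assert torch.allclose(F.cross_entropy(input, target), manual)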