CrossEntropyLoss — PyTorch 2.7 documentation
class torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)
This criterion computes the cross entropy loss between input logits and target.
It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning a weight to each of the classes. This is particularly useful when you have an unbalanced training set.

The input is expected to contain the unnormalized logits for each class (which do not need to be positive or sum to 1, in general). input has to be a Tensor of size (C) for unbatched input, (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K ≥ 1 for the K-dimensional case. The latter is useful for higher-dimensional inputs, such as computing cross entropy loss per-pixel for 2D images.
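Because the input is interpreted as unnormalized logits, adding a constant to every logit of a sample leaves the loss unchanged. A minimal sketch of this property (tensor values chosen arbitrarily):

```python
import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, -1.0, 0.5]])  # raw scores: not positive, don't sum to 1
target = torch.tensor([0])

# Softmax is invariant to adding a constant to every logit of a sample,
# so the cross entropy loss is unchanged as well.
shifted = logits + 100.0
print(torch.allclose(loss(logits, target), loss(shifted, target)))  # True
```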
The target that this criterion expects should contain either:
- Class indices in the range [0, C) where C is the number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range). The unreduced (i.e. with reduction set to 'none') loss for this case can be described as:

  \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^C \exp(x_{n,c})} \cdot \mathbb{1}\{y_n \not= \text{ignore\_index}\}

  where x is the input, y is the target, w is the weight, C is the number of classes, and N spans the minibatch dimension as well as d_1, ..., d_k for the K-dimensional case. If reduction is not 'none' (default 'mean'), then

  \ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore\_index}\}} l_n, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases}
  Note that this case is equivalent to applying LogSoftmax on an input, followed by NLLLoss (see the sketch after this list).

- Probabilities for each class; useful when labels beyond a single class per minibatch item are required, such as for blended labels, label smoothing, etc. The unreduced (i.e. with reduction set to 'none') loss for this case can be described as:

  \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \sum_{c=1}^C w_c \log \frac{\exp(x_{n,c})}{\sum_{i=1}^C \exp(x_{n,i})} y_{n,c}

  where x is the input, y is the target, w is the weight, C is the number of classes, and N spans the minibatch dimension as well as d_1, ..., d_k for the K-dimensional case. If reduction is not 'none' (default 'mean'), then

  \ell(x, y) = \begin{cases} \frac{\sum_{n=1}^N l_n}{N}, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases}
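As a quick check of the LogSoftmax + NLLLoss equivalence noted above, a minimal sketch (shapes and values are arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
input = torch.randn(3, 5)
target = torch.tensor([1, 0, 4])

ce = nn.CrossEntropyLoss()(input, target)

# The same quantity, spelled out as log-softmax followed by NLL.
log_probs = nn.LogSoftmax(dim=1)(input)
nll = nn.NLLLoss()(log_probs, target)

print(torch.allclose(ce, nll))  # True
```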
Note
The performance of this criterion is generally better when target contains class indices, as this allows for optimized computation. Consider providing target as class probabilities only when a single class label per minibatch item is too restrictive.
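Both target types encode the same labels when the probabilities are one-hot. A small sketch checking that they produce the same loss under default settings (values arbitrary):

```python
import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
input = torch.randn(4, 6)
index_target = torch.tensor([2, 0, 5, 3])

# One-hot float target carrying the same labels as class probabilities.
prob_target = torch.zeros(4, 6)
prob_target[torch.arange(4), index_target] = 1.0

print(torch.allclose(loss(input, index_target), loss(input, prob_target)))  # True
```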
Parameters
- weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C and floating point dtype
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Note that ignore_index is only applicable when the target contains class indices.
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- label_smoothing (float, optional) – A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets become a mixture of the original ground truth and a uniform distribution as described in Rethinking the Inception Architecture for Computer Vision. Default: 0.0
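A short sketch exercising the optional arguments together; the class count, weights, and padding index below are made up for illustration:

```python
import torch
import torch.nn as nn

# Up-weight a rare class, skip padding targets, and smooth the labels.
class_weights = torch.tensor([0.2, 1.0, 1.0, 2.5])  # one weight per class (C = 4)
loss = nn.CrossEntropyLoss(
    weight=class_weights,
    ignore_index=-100,    # targets equal to -100 contribute nothing to the loss
    label_smoothing=0.1,  # mix each target with a uniform distribution over classes
)

input = torch.randn(5, 4, requires_grad=True)
target = torch.tensor([0, 3, -100, 1, 2])  # the third sample is ignored

output = loss(input, target)
output.backward()
```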
Shape:
- Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss.
- Target: If containing class indices, shape (), (N) or (N, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss, where each value should be in the range [0, C). The target data type is required to be long when using class indices. If containing class probabilities, the target must be the same shape as the input, and each value should be in [0, 1]. This means the target data type is required to be float when using class probabilities.
- Output: If reduction is 'none', shape (), (N) or (N, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss, depending on the shape of the input. Otherwise, scalar.

where:

C = number of classes
N = batch size
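For the K-dimensional case, a small sketch of per-pixel loss for 2D images; the batch, class, height, and width sizes here are arbitrary:

```python
import torch
import torch.nn as nn

N, C, H, W = 2, 3, 4, 4  # batch size, classes, height, width (arbitrary)
loss = nn.CrossEntropyLoss(reduction='none')

input = torch.randn(N, C, H, W, requires_grad=True)  # logits per pixel
target = torch.randint(0, C, (N, H, W))              # a class index per pixel

per_pixel = loss(input, target)
print(per_pixel.shape)  # torch.Size([2, 4, 4]): one loss value per pixel
```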
Examples:
Example of target with class indices
```python
import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)
output = loss(input, target)
output.backward()
```
Example of target with class probabilities
```python
# Reuses the `loss` module from the previous example.
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
output = loss(input, target)
output.backward()
```
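The same computation is also available through the functional interface, torch.nn.functional.cross_entropy, which skips constructing the module:

```python
import torch
import torch.nn.functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)

# cross_entropy accepts the same weight/ignore_index/reduction/label_smoothing keywords.
output = F.cross_entropy(input, target)
output.backward()
```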