Making Neural Networks Robust to Label Noise: a Loss Correction Approach

We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. Our method simply performs a correction on the loss function and is agnostic to both application domain and network architecture. We propose two procedures for loss correction that amount to at most a matrix inversion and multiplication, provided we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10 and CIFAR-100, employing a diversity of architectures (stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers), demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
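To make the two correction procedures concrete, below is a minimal PyTorch sketch, assuming a known row-stochastic noise-transition matrix T whose entry T[i, j] is the probability that clean class i is observed as class j. The function names and the small numerical epsilon are illustrative choices, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def backward_corrected_loss(logits, noisy_labels, T):
    """Backward correction: multiply the vector of per-class losses by
    T^{-1}, so the corrected loss on noisy labels is, in expectation,
    the loss on clean labels."""
    log_probs = F.log_softmax(logits, dim=1)       # (batch, n_classes)
    per_class_loss = -log_probs                    # cross-entropy for every class
    T_inv = torch.linalg.inv(T)                    # (n_classes, n_classes)
    corrected = per_class_loss @ T_inv.T           # mix per-class losses by T^{-1}
    return corrected.gather(1, noisy_labels.unsqueeze(1)).mean()

def forward_corrected_loss(logits, noisy_labels, T):
    """Forward correction: push the model's clean-class predictions
    through T, then evaluate cross-entropy against the noisy labels."""
    probs = F.softmax(logits, dim=1)               # model's clean-label posterior
    noisy_probs = probs @ T                        # predicted noisy-label posterior
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)
```

Note the trade-off between the two: the backward correction needs the matrix inversion mentioned above, while the forward correction only needs the multiplication, at the cost of correcting predictions rather than losses.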
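The estimation of the corruption probabilities can be sketched in the same spirit: train a network directly on the noisy labels, then, for each class, treat the training example the network is most confident about as an approximate anchor point and read a row of T off its predicted distribution. This is a hedged sketch of that idea under those assumptions; `probs` and `estimate_T` are illustrative names.

```python
import torch

def estimate_T(probs):
    """probs: (n_examples, n_classes) softmax outputs of a network
    trained directly on the noisy data. Returns a row-normalized
    estimate of the noise-transition matrix."""
    n_classes = probs.shape[1]
    rows = []
    for i in range(n_classes):
        anchor = torch.argmax(probs[:, i])  # most confident example for class i
        rows.append(probs[anchor])          # its distribution estimates row i of T
    T_hat = torch.stack(rows)
    return T_hat / T_hat.sum(dim=1, keepdim=True)
```

In practice, a single maximally confident example can be an outlier, so a robust variant (e.g. taking a high percentile of the per-class confidence instead of the maximum) may be preferable.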