# Noisy Labels and Label Smoothing

When we apply the cross-entropy loss to a classification task, we expect the true label's target to be 1 and all the others to be 0. In other words, we have no doubt that the true labels are true and the others are not. Is that always the case? Maybe not. Many manual annotations are the result of multiple participants, who may have different criteria and may make mistakes. They are human, after all. As a result, the ground-truth labels we had perfect belief in are possibly wrong.
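To make the setup concrete, here is a minimal NumPy sketch of cross-entropy with a hard one-hot target (the function name and example probabilities are illustrative, not from this repo):

```python
import numpy as np

# Cross-entropy between a target distribution and predicted probabilities.
# With a hard one-hot target, the loss reduces to -log(p[true_class]):
# only the probability assigned to the labeled class matters.
def cross_entropy(target, probs, eps=1e-12):
    return -np.sum(target * np.log(probs + eps))

probs = np.array([0.7, 0.2, 0.1])   # model's predicted distribution
target = np.array([1.0, 0.0, 0.0])  # hard one-hot label for class 0

loss = cross_entropy(target, probs)  # equals -log(0.7)
```

If the label itself is wrong, the loss still pushes the model toward full confidence in that wrong class.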

One possible solution is to relax our confidence in the labels. For instance, we can slightly lower the loss target value from 1 to, say, 0.9, and correspondingly raise the target values of the other classes slightly above 0. This idea is called label smoothing. Consult this for more information.
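A common way to implement this (one of several variants; here the smoothing mass ε is spread uniformly over the K−1 non-true classes) can be sketched as:

```python
import numpy as np

# Label smoothing: soften a hard one-hot target.
# The true class gets 1 - epsilon; the remaining mass epsilon is
# spread uniformly over the other K - 1 classes.
def smooth_labels(one_hot, epsilon=0.1):
    k = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + (1.0 - one_hot) * epsilon / (k - 1)

hard = np.array([1.0, 0.0, 0.0])
soft = smooth_labels(hard, epsilon=0.1)  # [0.9, 0.05, 0.05]
```

The smoothed target is still a valid probability distribution (it sums to 1), so it drops straight into the cross-entropy loss.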

In this short project, I examine the effect of label smoothing when there is some noise in the labels. Concretely, I'd like to see whether label smoothing is effective in a binary classification/labeling task where both labels are noisy, or only one label is noisy.
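The two noise conditions described above could be simulated along these lines; this is a hypothetical sketch of label corruption, not the notebook's actual code:

```python
import numpy as np

# Inject label noise into binary labels: flip a fraction of them,
# either symmetrically (both classes noisy) or asymmetrically
# (only the labels of one class noisy).
def corrupt_labels(labels, flip_rate=0.2, only_class=None, seed=0):
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    if only_class is None:
        # symmetric noise: every label may flip with probability flip_rate
        mask = rng.random(labels.shape) < flip_rate
    else:
        # asymmetric noise: only labels equal to only_class may flip
        mask = (labels == only_class) & (rng.random(labels.shape) < flip_rate)
    noisy[mask] = 1 - noisy[mask]  # flip 0 <-> 1
    return noisy

y = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_sym = corrupt_labels(y, flip_rate=0.25)                  # both labels noisy
y_asym = corrupt_labels(y, flip_rate=0.25, only_class=1)   # only positives noisy
```

Training on `y_sym` versus `y_asym`, with and without smoothing, isolates whether the smoothing target (0.9/0.1) helps under each noise pattern.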
