# Adaptive Sample Selection for Robust Learning under Label Noise (IEEE/CVF WACV 2023) Paper
## Installation

Make a copy of this repo (e.g. with `git clone`), `cd` into its root folder, and run:

```bash
pip install -e .
```
## Requirements

- PyTorch >= 1.3
- Python >= 3.7
- tqdm, numpy-indexed, etc. (easily installed via pip)
## Directory Structure

This project is organized into folders:

- `data`: should contain all the dataset files
- `scripts`: contains scripts for all the algorithms
- `results`: should contain all the output pickle files, checkpoints, etc.
## Training

`cd` into the `scripts` folder and run `<algo>.py`, where `<algo>` is the algorithm to be used for training.
Filenames for the algorithms (naming convention from the paper):

- BARE: `batch_rewgt.py`
- MR: `meta_ren.py`
- MN: `meta_net.py`
- CoT: `coteaching.py`
- CoT+: `coteaching.py`
- CL: `curr_loss.py`
- CCE: `risk_min_cce.py`
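BARE's central idea is to select clean-looking samples in each mini-batch using only batch statistics, so no noise rate needs to be known or tuned in advance. As an illustrative sketch (not necessarily the exact statistic used in `batch_rewgt.py`), one can keep the samples whose per-sample loss falls below the batch mean plus one standard deviation:

```python
import numpy as np

def batch_selection_mask(per_sample_losses):
    """Illustrative batch-statistics-based sample selection:
    keep samples whose loss is at most (batch mean + one std).
    The threshold adapts to every mini-batch, so no noise rate
    has to be specified in advance. The exact statistic used by
    the paper/scripts may differ."""
    losses = np.asarray(per_sample_losses, dtype=float)
    threshold = losses.mean() + losses.std()
    return (losses <= threshold).astype(float)  # 1.0 = keep, 0.0 = drop

# A sample with a conspicuously large loss (likely mislabeled) is dropped:
mask = batch_selection_mask([0.2, 0.3, 0.25, 5.0])  # -> [1., 1., 1., 0.]
```

In training, the resulting mask would multiply the per-sample losses before averaging, so the gradient ignores the dropped samples.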
For example, to use BARE, one such command could look like this:

```bash
python batch_rewgt.py --dataset mnist --noise_rate 0.4 --noise_type sym --loss_name cce --data_aug 0 --batch_size 128 --num_epoch 200 --num_runs 5
```

This will train the neural network with the CCE loss on the un-augmented MNIST dataset, corrupted with 40% symmetric label noise. Training uses a batch size of 128 and runs for 200 epochs, repeated over a total of 5 runs.
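Here `--noise_type sym` with `--noise_rate 0.4` means each training label is flipped, with probability 0.4, to one of the other classes chosen uniformly at random. A hypothetical helper illustrating this kind of corruption (the repo's own corruption code may differ in details):

```python
import numpy as np

def add_symmetric_noise(labels, noise_rate, num_classes, seed=0):
    """Flip each label with probability `noise_rate` to a different,
    uniformly chosen class (symmetric label noise). Hypothetical
    helper for illustration; the repo's corruption code may differ."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(labels).copy()
    flip = rng.random(noisy.shape[0]) < noise_rate
    for i in np.where(flip)[0]:
        # Choose uniformly among the other (num_classes - 1) labels
        others = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(others)
    return noisy
```

On a large sample, roughly `noise_rate` of the labels end up changed, and a flipped label is never equal to the original one.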
## Clothing-1M and Food-101N

Use the files `batch_rewgt_clothing1M.py` and `batch_rewgt_food101N.py` instead of `batch_rewgt.py`. Ensure that the Clothing-1M and Food-101N data resides in the `data` folder.

```bash
python batch_rewgt_clothing1M.py --loss_name cce --batch_size 256 --num_epoch 15 --num_runs 1
python batch_rewgt_food101N.py --loss_name cce --batch_size 256 --num_epoch 20 --num_runs 1
```
## Citation

The following citation can be used:

```bibtex
@inproceedings{bare_wacv_2023,
  title={Adaptive Sample Selection for Robust Learning under Label Noise},
  author={Patel, Deep and Sastry, P S},
  booktitle={WACV},
  year={2023}
}
```