Install the requirements:

```shell
conda env create -f environment.yml
conda activate DLD-env
```

To update an existing environment:

```shell
conda env update --file environment.yml --prune
```
The environment name defaults to DLD-env. To change it, edit the first line of the environment.yml file.
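For reference, the name is set on the first line of environment.yml. A minimal sketch (the channels and dependency list below are illustrative, not the repository's actual file):

```yaml
name: DLD-env          # change this line to rename the environment
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.9         # illustrative versions; see the repository's environment.yml
  - pytorch
  - pip
```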
- ViT pre-trained models are available through the OpenAI CLIP Python package. Install it without dependencies:

```shell
pip install ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git --no-deps
```
Trained checkpoints for the directional diffusion models will be available soon.
The instance-dependent noise (IDN) labels used in our experiments are provided in the folder noise_label_IDN. The noisy labels were generated following the original paper.
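As a rough illustration of instance-dependent noise (a simplified sketch, not the exact procedure of the original paper), each label is flipped with a probability that depends on the sample's features, so "harder" instances are more likely to be mislabeled:

```python
import numpy as np

def add_instance_dependent_noise(features, labels, num_classes, noise_rate, seed=0):
    """Simplified sketch of instance-dependent label noise (IDN).

    Flip probabilities come from a random linear projection of the features,
    rescaled so the average flip rate matches `noise_rate`. This is an
    illustration only; see the original paper for the exact recipe.
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    # Per-sample scores from a random projection, normalized to [0, 1].
    scores = features @ rng.normal(size=d)
    scores = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
    # Rescale so the mean flip probability equals the target noise rate.
    flip_prob = np.clip(scores * noise_rate / scores.mean(), 0.0, 1.0)
    flip = rng.random(n) < flip_prob
    noisy = labels.copy()
    # Flip selected samples to a uniformly chosen *different* class.
    offsets = rng.integers(1, num_classes, size=n)
    noisy[flip] = (labels[flip] + offsets[flip]) % num_classes
    return noisy

# Example: roughly 10% of labels flipped on synthetic data.
X = np.random.default_rng(1).normal(size=(1000, 32))
y = np.random.default_rng(2).integers(0, 10, size=1000)
y_noisy = add_instance_dependent_noise(X, y, num_classes=10, noise_rate=0.1)
```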
Default values for the input arguments are given in the code. An example command:

```shell
python train_on_CIFAR.py --device cuda:0 --noise_type cifar10-idn-0.1 --nepoch 200 --warmup_epochs 5 --log_name cifar10-idn-0.1.log
```
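To sweep several IDN rates (the rates below are illustrative), a small shell loop can generate the commands. Shown here as a dry run with `echo`; remove the `echo` to actually launch training:

```shell
# Dry run: print one training command per noise rate (remove `echo` to execute).
for rate in 0.1 0.2 0.3 0.4 0.5; do
  echo python train_on_CIFAR.py --device cuda:0 \
    --noise_type "cifar10-idn-${rate}" --nepoch 200 --warmup_epochs 5 \
    --log_name "cifar10-idn-${rate}.log"
done
```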
The dataset should be downloaded according to the instructions here: Animal10N.
Default values for the input arguments are given in the code. An example command:

```shell
python train_on_Animal10N.py --device cuda:0 --nepoch 200 --warmup_epochs 5 --log_name Animal10N.log
```
Download the WebVision 1.0 dataset and the validation set of ILSVRC2012.

```shell
python train_on_WebVision.py --gpu_devices 0 1 2 3 4 5 6 7 --nepoch 200 --warmup_epochs 5 --log_name Webvision.log
```

Evaluate the trained model on the ILSVRC2012 validation set:

```shell
python test_on_ILSVRC2012.py --gpu_devices 0 1 2 3 4 5 6 7 --log_name ILSVRC2012.log
```
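The commands above write progress to the file passed via --log_name. Assuming the logs contain lines such as `epoch 10 acc 75.3` (a hypothetical format; adapt the regex to the repository's actual logging), the best accuracy can be pulled out with a few lines of Python:

```python
import re

def best_accuracy(log_text):
    """Return the highest accuracy found in a training log.

    Assumes lines containing e.g. 'acc 75.3' -- a hypothetical log format;
    adjust the pattern to match the actual log layout.
    """
    accs = [float(m) for m in re.findall(r"acc\s+([0-9]+(?:\.[0-9]+)?)", log_text)]
    return max(accs) if accs else None

sample = "epoch 1 acc 62.1\nepoch 2 acc 70.4\nepoch 3 acc 69.8\n"
print(best_accuracy(sample))  # -> 70.4
```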
The dataset should be downloaded according to the instructions here: Clothing1M. Default values for the input arguments are given in the code.

```shell
python train_on_Clothing1M.py --gpu_devices 0 1 2 3 4 5 6 7 --nepoch 200 --warmup_epochs 5 --log_name Clothing1M.log
```