Exploring Vision-Language Models for Imbalanced Learning

Code and experimental logs for the paper Exploring Vision-Language Models for Imbalanced Learning, accepted by the International Journal of Computer Vision.

Requirements

The code is built on USB (the Unified Semi-supervised Learning Benchmark).

  • Python 3.10
  • PyTorch 2.0
  • CUDA 11.8

To reproduce our results, you can recreate our exact conda environment with:

conda env create -f environment.yml
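
After the environment is created and activated, a quick sanity check can confirm the installed stack matches the requirements above. A minimal sketch using standard PyTorch attributes; the expected values come from the requirements list:

# Optional sanity check: verify the installed versions match the
# requirements above (standard PyTorch attributes).
import torch

print("PyTorch:", torch.__version__)           # expected: 2.0.x
print("CUDA build:", torch.version.cuda)       # expected: 11.8
print("GPU visible:", torch.cuda.is_available())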

Datasets

The experiments cover three long-tailed benchmarks: ImageNet-LT, iNaturalist, and Places. Download them, then point the config generator at their locations as described under Training below.

Training

Set the paths to your datasets in scripts/config_generator_imb_clip.py (L237), then generate the config files:

cd Imbalance-VLM && mkdir -p logs config
python3 scripts/config_generator_imb_clip.py
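
To confirm the configs were generated, you can count the emitted YAML files (standard library only; the glob pattern assumes the config/ layout used by the example command below):

import glob

# Count the YAML configs emitted by the generator under ./config/.
configs = glob.glob("./config/**/*.yaml", recursive=True)
print(f"found {len(configs)} generated config files")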

Then you can run experiments with commands like:

python3 train.py --c ./config/imb_clip_stage1_algs/supervised/imagenet_lt_softmax_None_None_0.yaml
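
Each config is plain YAML, so you can inspect or tweak hyperparameters before launching a run. A minimal sketch, assuming PyYAML is available in the environment:

import yaml

# Load one generated config and print its top-level settings.
path = "./config/imb_clip_stage1_algs/supervised/imagenet_lt_softmax_None_None_0.yaml"
with open(path) as f:
    cfg = yaml.safe_load(f)

for key, value in sorted(cfg.items()):
    print(f"{key}: {value}")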

scripts/config_generator_imb_clip.py also writes every command needed to reproduce our results to all_commands.txt; you can dispatch them across GPUs with https://github.com/ExpectationMax/simple_gpu_scheduler.
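
For example, to spread the commands over four GPUs (the invocation below follows simple_gpu_scheduler's README; adjust the GPU ids to your machine):

simple_gpu_scheduler --gpus 0 1 2 3 < all_commands.txt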

Experiment Results

Training logs are archived at the Internet Archive. All our experiment data (including debug runs) were uploaded to wandb; see our wandb projects: Imagenet_LT, iNaturalist, and Places.

Cite Us

@article{wang2023exploring,
  title={Exploring Vision-Language Models for Imbalanced Learning},
  author={Wang, Yidong and Yu, Zhuohao and Wang, Jindong and Heng, Qiang and Chen, Hao and Ye, Wei and Xie, Rui and Xie, Xing and Zhang, Shikun},
  journal={International Journal of Computer Vision},
  year={2023}
}
