This repository implements the method described in the paper "Self-Supervised Pre-Training with Adaptive Sampling for Fine-Grained Image Retrieval". The project combines self-supervised pre-training with an Adaptive Sample Selector (ASS) to build a fine-grained image retrieval (FGIR) system.
- Model Architecture: Integrates contrastive and generative learning with an Adaptive Sample Selector (ASS) that dynamically selects training samples based on difficulty.
- Experimental Setup: Supports training and evaluation on datasets such as CUB-200, Cars-196, SOP, and In-Shop.
- Code Structure: Detailed in the repository structure below.
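The exact selection rule used by the ASS is defined in the paper; as a rough illustration of difficulty-based sample selection, here is a minimal sketch in which per-sample training loss stands in for difficulty (the function name and the loss-as-difficulty proxy are assumptions, not the repository's implementation):

```python
import numpy as np

def select_adaptive(losses, keep_ratio=0.5):
    """Hypothetical sketch: keep the hardest samples in a batch.

    `losses` is a 1-D array of per-sample training losses; a higher loss
    is treated as higher difficulty. Returns indices of selected samples.
    """
    losses = np.asarray(losses)
    k = max(1, int(len(losses) * keep_ratio))
    # argsort ascending, take the last k entries (largest losses = hardest)
    return np.argsort(losses)[-k:]

# Example: from 6 samples, keep the 3 with the highest loss
idx = select_adaptive([0.1, 0.9, 0.3, 1.2, 0.05, 0.7], keep_ratio=0.5)
```

In practice such a selector would be applied per batch during training, with `keep_ratio` (or an equivalent schedule) controlling how aggressively easy samples are discarded.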
We recommend creating a virtual environment using conda or virtualenv, then installing the dependencies:

```bash
pip install -r requirements.txt
```

Adjust the parameters in experiments/config.yaml as needed, then start training with:

```bash
python scripts/train.py --config experiments/config.yaml
```

After training, evaluate your model using:

```bash
python scripts/evaluate.py --config experiments/config.yaml --checkpoint path/to/model_checkpoint.pth
```
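Retrieval benchmarks such as CUB-200 and Cars-196 are typically evaluated with Recall@K. The repository's own evaluation lives in scripts/evaluate.py; purely as an illustration of the metric, here is a minimal NumPy sketch (the function name and interface are hypothetical):

```python
import numpy as np

def recall_at_k(embeddings, labels, k=1):
    """Hypothetical sketch of Recall@K for image retrieval.

    A query counts as a hit if any of its k nearest neighbors
    (excluding itself) shares its class label.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    # pairwise squared Euclidean distances between all embeddings
    sq = (embeddings ** 2).sum(axis=1)
    dists = sq[:, None] + sq[None, :] - 2 * embeddings @ embeddings.T
    np.fill_diagonal(dists, np.inf)  # exclude self-matches
    nn = np.argsort(dists, axis=1)[:, :k]
    hits = (labels[nn] == labels[:, None]).any(axis=1)
    return hits.mean()

# Toy example: two well-separated classes
emb = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
lab = [0, 0, 1, 1]
score = recall_at_k(emb, lab, k=1)
```

With the toy embeddings above, every query's nearest neighbor is its class-mate, so Recall@1 is 1.0.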
- src/: Contains the model definitions, modules, dataset loader, and utility tools.
- scripts/: Contains training and evaluation scripts.
- docs/: Documentation for model architecture and usage.
- tests/: Unit tests for various modules.
If you find this project helpful for your work, please cite our paper:
Xiaoqing Li, Ya Wang, "Self-Supervised Pre-Training with Adaptive Sampling for Fine-Grained Image Retrieval", 2025.