The official implementation of Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models.
- [17/01/2024] repo online.
- TTA for CLIP OOD classification with RLCF. Prompt tuning + backbone tuning.
- TTA for CLIP retrieval with RLCF.
- Training and TTA for ClipCap and CapDec.
The code for the three tasks in this repo is independent; you can set them up task by task.
First of all, you need to download the datasets and pre-trained models.
- OOD image classification datasets:
- ImageNet
- ImageNet-on-huggingface
- ImageNet-A
- ImageNet-R
- ImageNet-V2
- ImageNet-Sketch
- The code also supports fine-grained datasets used in TPT and ImageNet-C.
- Retrieval datasets (credit to salesforce/LAVIS)
- Captioning datasets
- Weights of pre-trained models:
- CLIP-ViT-B/32
- CLIP-ViT-B/16
- CLIP-ViT-L/14
- RN50x64
- facebook/opt-125m
- CoOp Weights
- weights-of-ClipCap. Put them at ${ROOT}/output.
- weights-of-CapDec. Put them at ${ROOT}/output.
- For convenience, you can also download all the datasets from BaiduYunPan. Please use them only for research or education purposes.
  - BaiduYunPan RLCF, the extraction code is d653.
Generally, directories are organized as follows:
${ROOT}
├── dataset
│ │
│ ├──tta_data
│ │ ├──ImageNet
│ │ ├──imagenet-a
│ │ ├──imagenet-r
│ │ ├──ImageNet-Sketch
│ │ └──imagenetv2-matched-frequency-format-val
│ │
│ ├──coco2014
│ ├──nocaps
│ └──flickr30k
│
├── code
│ └── RLCF
│ ├──caption
│ ├──clipscore
│ ├──retrieval
│ └──TPT
│
├── output (save the output of the program)
│
│
├── pretrained
│ ├──opt-125m
│ ├──coop
│ │ └──coop_16shots_nctx4_cscFalse_ctpend_vitb16_seed1
│ │
│ └── clip (download the CLIP pre-trained weights and put them here)
│ └── ViT-B-16.pt
│
...
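If you prefer to create this skeleton programmatically, a minimal sketch is shown below; the ROOT path is a placeholder you must adjust to your own setup.

```python
# Create the directory skeleton shown above. ROOT is a placeholder path, not part of this repo.
from pathlib import Path

ROOT = Path("/path/to/ROOT")
for d in [
    "dataset/tta_data", "dataset/coco2014", "dataset/nocaps", "dataset/flickr30k",
    "code", "output",
    "pretrained/clip", "pretrained/coop", "pretrained/opt-125m",
]:
    (ROOT / d).mkdir(parents=True, exist_ok=True)
```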
Requires Python >= 3.8 and PyTorch >= 1.12.
The following commands are tested on a Linux machine with CUDA Driver Version 525.105.17 and CUDA Version 11.7.
conda create --name rlcf python=3.8.5
pip install -r requirements.txt
I use
torch==1.13.1+cu117
torchvision==0.14.1+cu117
--extra-index-url https://download.pytorch.org/whl/cu117
in the requirements file.
If you use a different CUDA version, simply remove them (the last 3 lines of requirements.txt) and then do
conda create --name rlcf python=3.8.5
conda install pytorch==1.13.1 torchvision==0.14.1 -c pytorch
pip install -r requirements.txt
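After installation, an optional sanity check confirms that PyTorch sees your GPU and that the build roughly matches the tested setup (the file name sanity_check.py is just an example):

```python
# sanity_check.py -- optional: verify the environment roughly matches the tested setup
import torch
import torchvision

print("torch:", torch.__version__)              # tested with 1.13.1+cu117
print("torchvision:", torchvision.__version__)  # tested with 0.14.1+cu117
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```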
- Before training, you should set the paths properly. Change all root variables in TPT/scripts/*.sh to your path.
- Set up the directory of CLIP in the Python files properly: the DOWNLOAD_ROOT_v2 variables in TPT/clip_reward.py and TPT/clip/custom_clip.py.
Then you can cd TPT/scripts and run:
- For test-time prompt tuning with CLIP reward, refer to
bash rlcf-prompt.sh 0
To evaluate on ImageNet, ImageNet-V2, and ImageNet-Sketch (which have 1000 classes), you will need a GPU with more than 16GB of memory (16GB itself is not enough).
- For test-time CLIP image encoder tuning with CLIP reward, refer to
bash rlcf-tune.sh 0
A 16GB GPU card should be enough.
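To make the objective behind rlcf-prompt.sh and rlcf-tune.sh easier to follow, here is a minimal sketch of a CLIP-reward, REINFORCE-style update. The helper names clip_reward and reinforce_loss are illustrative and the logic is simplified; see TPT/clip_reward.py and the scripts themselves for the actual implementation.

```python
# Minimal sketch of a CLIP-reward TTA objective (illustrative, not the repo's exact code).
import torch
import torch.nn.functional as F

@torch.no_grad()
def clip_reward(teacher, image, candidate_tokens):
    """Reward from a frozen teacher CLIP: cosine similarity between the test image
    and each of the K candidate texts, with the mean subtracted as a baseline."""
    img = F.normalize(teacher.encode_image(image), dim=-1)            # (1, D)
    txt = F.normalize(teacher.encode_text(candidate_tokens), dim=-1)  # (K, D)
    sim = (img @ txt.t()).squeeze(0)                                   # (K,)
    return sim - sim.mean()

def reinforce_loss(student_logits, reward):
    """REINFORCE-style loss: raise the log-probability of candidates the teacher
    rewards, lower it for the rest. Gradients flow only into the student's
    tunable parameters."""
    logp = F.log_softmax(student_logits, dim=-1)
    return -(reward.detach() * logp).sum()
```

Roughly speaking, in the classification setting the K candidates are prompt-templated class names for the student's top predictions; rlcf-prompt.sh tunes the prompt while rlcf-tune.sh tunes the CLIP image encoder. The same CLIP-reward idea is what the retrieval and captioning scripts below build on.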
- Before training, you should set the paths properly. Change all root variables in retrieval/scripts/*.sh to your path.
- Set up the directory of CLIP in the config and Python files properly.
  - Globally search for /YOUR/PATH in the retrieval directory and change /YOUR/PATH to your path.
  - To name a few: retrieval/lavis/models/clip_models/pretrained.py, retrieval/lavis/configs/datasets/coco and flickr30k, retrieval/clip_rewards.py, retrieval/custom_models.py, ...
Then you can cd retrieval/scripts and run:
- For test-time CLIP image encoder tuning with CLIP reward on COCO2014, refer to
bash tta_coco_ret.sh 0
- For test-time CLIP image encoder tuning with CLIP reward on flickr30k, refer to
bash tta_flickr_ret.sh 0
- Before training, you should set the paths properly. Change all root variables in caption/scripts/*.sh to your path.
- Set up the directory of CLIP in the Python files properly.
  - Globally search for /YOUR/PATH in the caption directory and change /YOUR/PATH to your path.
  - To name a few: caption/clip_rewards.py, ...
Then you can cd caption/scripts and run:
- For TTA with CapDec, COCO --> flickr30k or COCO --> Nocaps, refer to
bash tta_capdec_c2f.sh 0
bash tta_capdec_c2n.sh 0
- For TTA with ClipCap, COCO --> flickr30k or COCO --> Nocaps, refer to
bash tta_clipcap_c2f.sh 0
bash tta_clipcap_c2n.sh 0
- For training with ClipCap or CapDec on COCO, refer to
bash train_capdec_coco.sh 0
bash train_clipcap_coco.sh 0
You need to download the CLIP-features-for-coco or CLIP-features-for-flickr before training.
- For the evaluation of captioning results, we adopt the scripts from clipscore. They include Bleu, Meteor, Rouge, Cider, and CLIPScore. If you also want Spice, try uncommenting line 25 in clipscore/generation_eval_utils.py.
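For intuition, the reference-free CLIPScore is essentially a scaled, clipped cosine similarity between CLIP embeddings of the image and the candidate caption. The snippet below is my own rough simplification; use the clipscore scripts in this repo for the numbers you report.

```python
# Rough sketch of reference-free CLIPScore: 2.5 * max(cos(E_image, E_caption), 0).
# Use the clipscore/ scripts in this repo for official evaluation numbers.
import clip
import torch
import torch.nn.functional as F
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clipscore(image_path: str, caption: str) -> float:
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        img = F.normalize(model.encode_image(image), dim=-1)
        txt = F.normalize(model.encode_text(tokens), dim=-1)
    return 2.5 * max((img @ txt.t()).item(), 0.0)
```

If you find this work useful, please consider citing: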
@inproceedings{
zhao2024testtime,
title={Test-Time Adaptation with {CLIP} Reward for Zero-Shot Generalization in Vision-Language Models},
author={Shuai Zhao and Xiaohan Wang and Linchao Zhu and Yi Yang},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=kIP0duasBb}
}
This repo is built upon these previous works.
- azshue/TPT
- openai/CLIP
- mlfoundations/open_clip
- huggingface/transformers
- salesforce/LAVIS
- KaiyangZhou/CoOp
- j-min/CLIP-Caption-Reward
- rmokady/CLIP_prefix_caption
- DavidHuji/CapDec
- mzhaoshuai/CenterCLIP
- VamosC/CoLearning-meet-StitchUp
- VamosC/CLIP4STR
The ghost sentence of this project is cupbearer tinsmith richly automatic rewash liftoff ripcord april fruit voter resent facebook.