We provide bash scripts in scripts/ for each prompting variant, including LAMM, CoOp+LAMM, and MaPLe+LAMM.
Make sure to configure the dataset paths in the environment variable DATA
and run all commands from the main directory of the repository.
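For example, assuming the datasets are stored under ~/data (an illustrative path, not a requirement):
# point the scripts to the dataset root
export DATA=~/data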
Below, we provide training and evaluation instructions for LAMM.
We train LAMM using a single NVIDIA A100 GPU.
The default training settings are provided in the config file configs/trainers/CoOp/vit_b16_ep50_ctxv1.yaml. All hyper-parameters, such as prompt length and prompt depth, can be modified in this config file.
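As a minimal sketch, a hyper-parameter can also be adjusted from the shell; the key name N_CTX is an assumption based on CoOp-style configs, so check the file for the actual field names:
# inspect the prompt-length field (key name N_CTX is assumed)
grep -n "N_CTX" configs/trainers/CoOp/vit_b16_ep50_ctxv1.yaml
# set the prompt length to 4 by editing the YAML in place
sed -i 's/N_CTX: .*/N_CTX: 4/' configs/trainers/CoOp/vit_b16_ep50_ctxv1.yaml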
Below, we provide instructions to train CLIP+LAMM on all datasets with seeds 1, 2, and 3.
bash scripts/lamm/base_train_lamm_all.sh
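Training logs for each run are written under output/; the exact layout below assumes the usual CoOp-style per-dataset, per-seed folders, with imagenet and seed1 as illustrative placeholders:
# follow one run's log (dataset and seed folders are assumed placeholders)
tail -f output/base/imagenet/shots_16/LAMM/vit_b16_ep50_ctxv1/seed1/log.txt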
CoOp+LAMM uses the same default training settings, provided in the config file configs/trainers/CoOp/vit_b16_ep50_ctxv1.yaml; hyper-parameters such as prompt length and prompt depth can again be modified in this config file.
Below, we provide instructions to train CoOp+LAMM on all datasets with seeds 1, 2, and 3.
bash scripts/coop/base_train_coop_lamm_all.sh
The default training settings are provided in the config file configs/trainers/MaPLe/vit_b16_c2_ep5_batch4_2ctx.yaml. All hyper-parameters, such as prompt length and prompt depth, can be modified in this config file.
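As above, a quick way to locate the prompt settings from the shell; the key names N_CTX and PROMPT_DEPTH are assumptions based on MaPLe-style configs:
# inspect prompt length and depth fields (key names assumed)
grep -n "N_CTX\|PROMPT_DEPTH" configs/trainers/MaPLe/vit_b16_c2_ep5_batch4_2ctx.yaml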
Below, we provide instructions to train MaPLe+LAMM on all datasets with seeds 1, 2, and 3.
bash scripts/maple/base_train_maple_lamm_all.sh
Now use the script parse_test_res.py and run the command below to calculate the averaged results:
# prints averaged results
python parse_test_res.py output/base/dataset/shots_16/LAMM/vit_b16_ep50_ctxv1
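Here dataset is a placeholder for a dataset's output folder. To average every dataset in one go, a simple loop over the layout above works:
# loop over all dataset folders under output/base/
for d in output/base/*/shots_16/LAMM/vit_b16_ep50_ctxv1; do
    python parse_test_res.py "$d"
done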
For base-to-new evaluation, we train on set1 and set2 respectively:
bash scripts/lamm/base2new_train_set1_lamm_all.sh
bash scripts/lamm/base2new_train_set2_lamm_all.sh
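The averaged base-to-new numbers can be collected with parse_test_res.py in the same way; the output path below is a hypothetical placeholder mirroring the base-training layout, so adjust it to wherever the base2new scripts write their logs:
# hypothetical output path; verify against the base2new scripts
python parse_test_res.py output/base2new/dataset/shots_16/LAMM/vit_b16_ep50_ctxv1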
Since we have already trained on ImageNet, we only need to evaluate on the other domains:
bash scripts/lamm/base_test_cross_imagenet_lamm.sh
We run the CoOp and MaPLe baselines directly on top of the MaPLe codebase.