
COMMA: Co-Articulated Multi-Modal Learning [AAAI 2024]

Official implementation of the paper "COMMA: Co-Articulated Multi-Modal Learning".


Main Contributions

  1. Correlated prompt generation: In prior methods, the prompts of the vision and language branches are either separate or only uni-directionally correlated. To better guide and align the representations of the two branches, we compute each layer's prompts from the preceding prompts of both branches, thereby aggregating beneficial multi-modal information.
  2. Alleviating forgetting of generic knowledge: The essential generic knowledge learned during pretraining is partly forgotten during fine-tuning. We alleviate this by minimizing the feature discrepancy between the learnable prompts and the hand-crafted prompts of the pretrained CLIP in the last several layers.
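The first contribution can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the prompt length, embedding dimensions, and the simple linear generators (`W_text`, `W_vision`) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: number of prompt tokens and per-branch embed dims.
n_prompts, d_text, d_vision = 4, 8, 8

# Preceding-layer prompts of the language and vision branches.
p_text = rng.standard_normal((n_prompts, d_text))
p_vision = rng.standard_normal((n_prompts, d_vision))

# Stand-in learnable projections that map the joint prompts to each branch.
W_text = rng.standard_normal((d_text + d_vision, d_text)) * 0.1
W_vision = rng.standard_normal((d_text + d_vision, d_vision)) * 0.1

def correlated_prompts(p_t, p_v, W_t, W_v):
    """Generate next-layer prompts from the concatenation of BOTH branches'
    preceding prompts, so each branch receives multi-modal context rather
    than evolving separately or only uni-directionally."""
    joint = np.concatenate([p_t, p_v], axis=-1)  # (n_prompts, d_text + d_vision)
    return joint @ W_t, joint @ W_v

next_text, next_vision = correlated_prompts(p_text, p_vision, W_text, W_vision)
print(next_text.shape, next_vision.shape)  # (4, 8) (4, 8)
```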
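The second contribution amounts to a retention loss that keeps fine-tuned features close to the frozen pretrained ones. The function name and the plain cosine-distance form below are assumptions for illustration; the paper's exact discrepancy measure may differ.

```python
import numpy as np

def knowledge_retention_loss(learned_feats, frozen_feats):
    """Mean (1 - cosine similarity) between features produced with the
    learnable prompts and features produced with hand-crafted prompts of
    the frozen pretrained CLIP. Minimizing this keeps the fine-tuned
    representation close to the generic pretrained one."""
    a = learned_feats / np.linalg.norm(learned_feats, axis=-1, keepdims=True)
    b = frozen_feats / np.linalg.norm(frozen_feats, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

rng = np.random.default_rng(1)
feats = rng.standard_normal((5, 16))
print(knowledge_retention_loss(feats, feats))  # ~0.0 for identical features
```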

☑️ Supported Methods

| Method | Paper | Configs | Training Scripts |
|---|---|---|---|
| MaPLe | CVPR 2023 | link | link |
| CoOp | IJCV 2022 | link | link |
| Co-CoOp | CVPR 2022 | link | link |
| Deep Vision Prompting | - | link | link |
| Deep Language Prompting | - | link | link |
| Independent V-L Prompting | - | link | link |
| COMMA (ours) | AAAI 2024 | link | link |

Results

COMMA in comparison with existing methods

Results reported below show accuracy on base and novel classes across 11 recognition datasets, averaged over 3 seeds. HM denotes the harmonic mean of the base and novel accuracies.

| Name | Base Acc. | Novel Acc. | HM | Epochs |
|---|---|---|---|---|
| CLIP | 69.34 | 74.22 | 71.70 | - |
| CoOp | 82.69 | 63.22 | 71.66 | 200 |
| CoCoOp | 80.47 | 71.69 | 75.83 | 10 |
| KgCoOp | 80.73 | 73.60 | 77.00 | 10 |
| MaPLe | 82.28 | 75.14 | 78.55 | 5 |
| COMMA (ours) | 82.42 | 75.87 | 79.04 | 5 |
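The HM column is the standard harmonic mean of base- and novel-class accuracy, which can be checked against the rows above:

```python
def harmonic_mean(base_acc, novel_acc):
    """Harmonic mean of base- and novel-class accuracy (the HM column)."""
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# Reproduces the CoOp row: HM(82.69, 63.22) rounds to 71.66.
print(round(harmonic_mean(82.69, 63.22), 2))  # 71.66
```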

Installation

For installation and other package requirements, please follow the instructions detailed in INSTALL.md.

Data preparation

Please follow the instructions at DATASETS.md to prepare all datasets.

Training and Evaluation

Please refer to RUN.md for detailed instructions on training and evaluation.

Acknowledgements

Our code is based on the Co-CoOp/CoOp and MaPLe repositories. We thank the authors for releasing their code.
