
DiffCLIP

DiffCLIP: Few-shot Language-driven Multimodal Classifier

AAAI 2025

Overview


Getting Started

Step 1: Clone the repository

Clone the DiffCLIP repository and navigate into the project directory:

git clone https://github.com/icey-zhang/DiffCLIP
cd DiffCLIP

Step 2: Set up the environment

We recommend creating a conda environment and installing dependencies with pip. Use the following commands:

Create and activate a new conda environment

conda create -n DiffCLIP python=3.9.17
conda activate DiffCLIP

Install the required packages:

pip install torch
......

Step 3: Prepare the dataset

Arrange the data in the following directory structure:

root
├── Trento
│   ├── HSI.mat
│   ├── LiDAR.mat
│   ├── TRLabel.mat
│   ├── TSLabel.mat
├── ......
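Once the files are in place, each modality can be read with SciPy. Below is a minimal sketch, not the project's actual data loader: the `.mat` variable keys (`HSI`, `LiDAR`, `TRLabel`, `TSLabel`) and the array shapes are assumptions, so check them against your copy of the Trento files. The sketch writes dummy files first so it runs standalone.

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

# Build a throwaway "root/Trento" directory with dummy data that mimics
# the expected layout (shapes are illustrative, not the real Trento sizes).
root = tempfile.mkdtemp()
trento = os.path.join(root, "Trento")
os.makedirs(trento)
savemat(os.path.join(trento, "HSI.mat"), {"HSI": np.zeros((166, 600, 63))})
savemat(os.path.join(trento, "LiDAR.mat"), {"LiDAR": np.zeros((166, 600))})
savemat(os.path.join(trento, "TRLabel.mat"), {"TRLabel": np.zeros((166, 600))})
savemat(os.path.join(trento, "TSLabel.mat"), {"TSLabel": np.zeros((166, 600))})

def load_trento(path):
    """Load the four Trento .mat files; the variable key inside each file
    is assumed to match the file stem (e.g. HSI.mat -> key 'HSI')."""
    hsi = loadmat(os.path.join(path, "HSI.mat"))["HSI"]
    lidar = loadmat(os.path.join(path, "LiDAR.mat"))["LiDAR"]
    train_labels = loadmat(os.path.join(path, "TRLabel.mat"))["TRLabel"]
    test_labels = loadmat(os.path.join(path, "TSLabel.mat"))["TSLabel"]
    return hsi, lidar, train_labels, test_labels

hsi, lidar, tr, ts = load_trento(trento)
print(hsi.shape, lidar.shape, tr.shape, ts.shape)
```

If the key names differ in your files, `loadmat(...).keys()` will list the variables actually stored in each `.mat` file.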

Step 4: Start training

python train.py

Citation

If our code is helpful to you, please cite:

@article{zhang2024multimodal,
  title={Multimodal Informative ViT: Information Aggregation and Distribution for Hyperspectral and LiDAR Classification},
  author={Zhang, Jiaqing and Lei, Jie and Xie, Weiying and Yang, Geng and Li, Daixun and Li, Yunsong},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  publisher={IEEE}
}

@inproceedings{zhang2025diffclip,
  title={DiffCLIP: Few-shot Language-driven Multimodal Classifier},
  author={Zhang, Jiaqing and Cao, Mingxiang and Yang, Xue and Jiang, Kai and Li, Yunsong},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2025}
}

