Step 1: Clone the DiffCLIP repository:
To get started, first clone the DiffCLIP repository and navigate to the project directory:
git clone https://github.com/icey-zhang/DiffCLIP
cd DiffCLIP
Step 2: Environment Setup:
DiffCLIP recommends setting up a conda environment and installing dependencies via pip. Use the following commands to set up your environment:
Create and activate a new conda environment
conda create -n DiffCLIP python=3.9.17
conda activate DiffCLIP
Install the necessary packages
pip install torch
......
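To confirm the environment is set up correctly, you can run a quick check that PyTorch imports and can see a GPU. This is a minimal sketch; the filename check_env.py and the check itself are illustrative and not part of the repository:
# check_env.py: verify that PyTorch is installed and CUDA is visible
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())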
Step 3: Dataset Preparation:
Organize the datasets in the following directory structure:
root
├── Trento
│   ├── HSI.mat
│   ├── LiDAR.mat
│   ├── TRLabel.mat
│   ├── TSLabel.mat
├── ......
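Before training, it can help to verify that the .mat files load correctly. The snippet below is a minimal sketch, assuming scipy is installed; the helper filename inspect_data.py and the root/ path are illustrative, and since the variable names inside each .mat file are not specified here, it simply prints whatever keys it finds:
# inspect_data.py: quick sanity check of the Trento .mat files
from scipy.io import loadmat

for name in ["HSI", "LiDAR", "TRLabel", "TSLabel"]:
    mat = loadmat(f"root/Trento/{name}.mat")
    # print every non-metadata key and the shape of its array, if it has one
    for key, value in mat.items():
        if not key.startswith("__"):
            print(name, key, getattr(value, "shape", None))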
Step 4: Training:
Run the training script:
python train.py
If you find our code helpful, please cite:
@article{zhang2024multimodal,
  title={Multimodal Informative ViT: Information Aggregation and Distribution for Hyperspectral and LiDAR Classification},
  author={Zhang, Jiaqing and Lei, Jie and Xie, Weiying and Yang, Geng and Li, Daixun and Li, Yunsong},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  publisher={IEEE}
}
@inproceedings{zhang2025DiffCLIP,
  title={DiffCLIP: Few-shot Language-driven Multimodal Classifier},
  author={Zhang, Jiaqing and Cao, Mingxiang and Yang, Xue and Jiang, Kai and Li, Yunsong},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2025}
}