Code release for "Vision-Language Model IP Protection via Prompt-based Learning" (CVPR 2025)
We propose IP-CLIP, a lightweight intellectual property (IP) protection strategy tailored to CLIP for protecting vision-language models.
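As a rough illustration of the prompt-based learning that IP-CLIP builds on, the sketch below shows CoOp-style prompt tuning against a frozen CLIP-like encoder: only a small set of learnable context vectors prepended to the class-name tokens is updated, while the backbone stays fixed. All names, shapes, and the toy "encoder" here are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8      # embedding dim (real CLIP uses 512/768)
n_ctx = 4  # number of learnable context tokens
n_cls = 3  # number of classes

# Frozen pieces (stand-ins): class-name token embeddings and one image feature.
class_tokens = rng.normal(size=(n_cls, d))
image_feat = rng.normal(size=(d,))

# Learnable context vectors (the "prompt") shared across classes.
ctx = rng.normal(scale=0.02, size=(n_ctx, d))

def text_features(ctx):
    # Toy text encoder: mean-pool [ctx; class_token], then L2-normalize.
    prompts = np.concatenate(
        [np.broadcast_to(ctx, (n_cls, n_ctx, d)), class_tokens[:, None, :]],
        axis=1,
    )
    feats = prompts.mean(axis=1)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def logits(ctx):
    img = image_feat / np.linalg.norm(image_feat)
    return text_features(ctx) @ img  # cosine similarities per class

def loss(ctx):
    # Cross-entropy toward class 0 (an arbitrary illustrative label).
    z = logits(ctx)
    z = z - z.max()
    return -z[0] + np.log(np.exp(z).sum())

# One gradient step on the prompt only, via central finite differences,
# mimicking how prompt tuning updates ctx while CLIP stays frozen.
eps, lr = 1e-4, 0.1
grad = np.zeros_like(ctx)
for i in range(n_ctx):
    for j in range(d):
        e = np.zeros_like(ctx)
        e[i, j] = eps
        grad[i, j] = (loss(ctx + e) - loss(ctx - e)) / (2 * eps)

before = loss(ctx)
ctx = ctx - lr * grad
print(f"loss: {before:.4f} -> {loss(ctx):.4f}")
```

The point of the sketch is only the parameter split: the "encoder" weights (`class_tokens`, `image_feat`) never change, and optimization touches just the prompt `ctx`.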
The code is implemented with CUDA 11.4, Python 3.8.5, and PyTorch 1.8.0.
Office-31 dataset can be found here.
Office-Home dataset can be found here.
Mini-DomainNet dataset can be found here.
Target-Specified IP-CLIP
python Target_Specified/train.py
Ownership Verification by IP-CLIP
python Ownership/train.py
Target-Free IP-CLIP
python Target_Free/train.py
Applicability Authorization by IP-CLIP
python Authorization/train.py
If you find this code useful for your research, please cite our paper:
@article{wang2025vision,
title={Vision-Language Model IP Protection via Prompt-based Learning},
author={Wang, Lianyu and Wang, Meng and Fu, Huazhu and Zhang, Daoqiang},
journal={arXiv preprint arXiv:2503.02393},
year={2025}
}
If you have any problems with our code, feel free to contact us.
