LyWang12/IP-CLIP

Vision-Language Model IP Protection via Prompt-based Learning

Code release for "Vision-Language Model IP Protection via Prompt-based Learning" (CVPR 2025)

Paper

Vision-Language Model IP Protection via Prompt-based Learning (CVPR 2025)

We propose IP-CLIP, a lightweight intellectual property (IP) protection strategy for vision-language models, tailored to CLIP and built on prompt-based learning.
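As a rough illustration of the prompt-based learning idea that IP-CLIP builds on, the sketch below prepends a small set of learnable context vectors to token embeddings from a frozen text encoder. This is a generic prompt-learning sketch in plain PyTorch; the class name, context length, and embedding dimension are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """Learnable context vectors prepended to token embeddings, in the
    spirit of prompt learning for CLIP. Dimensions are illustrative and
    not taken from the IP-CLIP code."""
    def __init__(self, n_ctx=8, dim=512):
        super().__init__()
        # The only trainable parameters: the CLIP backbone stays frozen.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)

    def forward(self, token_embeds):
        # token_embeds: (batch, seq_len, dim) from a frozen text encoder
        batch = token_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned context to each sequence.
        return torch.cat([ctx, token_embeds], dim=1)

prompt = LearnablePrompt()
x = torch.randn(4, 16, 512)   # dummy class-name token embeddings
out = prompt(x)
print(out.shape)              # torch.Size([4, 24, 512])
```

Only the prompt vectors receive gradients during training, which is what makes this family of methods lightweight compared with fine-tuning the full model.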

Prerequisites

The code is implemented with CUDA 11.4, Python 3.8.5, and PyTorch 1.8.0.
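A matching environment can be set up roughly as follows. The environment name is an arbitrary choice, and the torchvision version is our assumption of the release paired with PyTorch 1.8.0; adjust package sources for your CUDA install.

```shell
# Illustrative setup, not an official script from this repository.
conda create -n ipclip python=3.8.5 -y
conda activate ipclip
pip install torch==1.8.0 torchvision==0.9.0
```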

Datasets

Office-31

Office-31 dataset can be found here.

Office-Home

Office-Home dataset can be found here.

Mini-DomainNet

Mini-DomainNet dataset can be found here.

Running the code

Target-Specified IP-CLIP

python Target_Specified/train.py

Ownership Verification by IP-CLIP

python Ownership/train.py

Target-free IP-CLIP

python Target_Free/train.py

Applicability Authorization by IP-CLIP

python Authorization/train.py

Citation

If you find this code useful for your research, please cite our paper:

@article{wang2025vision,
  title={Vision-Language Model IP Protection via Prompt-based Learning},
  author={Wang, Lianyu and Wang, Meng and Fu, Huazhu and Zhang, Daoqiang},
  journal={arXiv preprint arXiv:2503.02393},
  year={2025}
}

Contact

If you have any problems with our code, feel free to contact us.
