# VLGuard

[Website] [Paper] [Data]

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models.

## Dataset

You can find the dataset on Hugging Face. `train.json` and `test.json` contain the VLGuard metadata; the images are in `train.zip` and `test.zip`.
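If you use the Hugging Face Hub client, the files can be fetched directly. A minimal sketch, assuming the dataset repo id is `ys-zong/VLGuard` (check the [Data] link for the exact id):

```python
import zipfile

from huggingface_hub import hf_hub_download

# Fetch the training metadata and image archive from the Hub
# (test.json and test.zip work the same way).
meta_path = hf_hub_download(
    repo_id="ys-zong/VLGuard",  # assumed repo id; see the [Data] link
    filename="train.json",
    repo_type="dataset",
)
zip_path = hf_hub_download(
    repo_id="ys-zong/VLGuard",
    filename="train.zip",
    repo_type="dataset",
)

# Unpack the images next to the metadata.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("VLGuard_images")
```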

## Usage

To fine-tune LLaVA or MiniGPT-v2, first run

```bash
python convert_to_llava_format.py
```

to convert VLGuard to the LLaVA data format, then follow their respective fine-tuning scripts.
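For orientation, LLaVA's training data is a JSON list of conversation records, so the conversion produces entries shaped roughly like the sketch below. The field names follow LLaVA's documented format; the id, image path, and dialogue text here are made up for illustration:

```python
import json

# One converted record in LLaVA's conversation format. The "<image>\n"
# token marks where the image is injected into the first user turn.
record = {
    "id": "vlguard_train_0000",    # hypothetical id
    "image": "train/example.jpg",  # hypothetical image path
    "conversations": [
        {"from": "human", "value": "<image>\nIs the instruction in this image safe to follow?"},
        {"from": "gpt", "value": "No. The image contains an unsafe instruction, so I cannot help with it."},
    ],
}

# The full converted file is a list of such records.
with open("vlguard_llava_format.json", "w") as f:
    json.dump([record], f, indent=2)
```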

## Citation

```bibtex
@article{zong2024safety,
  title={Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models},
  author={Zong, Yongshuo and Bohdal, Ondrej and Yu, Tingyang and Yang, Yongxin and Hospedales, Timothy},
  journal={arXiv preprint arXiv:2402.02207},
  year={2024}
}
```
