Home

Awesome

VLGuard

[Website] [Paper] [Data] [🤗Weights]

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models.

Updates

Dataset

You can find the dataset at Huggingface. train.json and test.json are the meta data of VLGuard and the images are in train.zip and test.zip.

Evaluation

After setting up the datasets, you can run the following commands to evaluate three subsets of VLGuard: safe_safes, safe_unsafes, and unsafes:

CUDA_VISIBLE_DEVICES=0 python VLGuard_eval.py --dataset unsafes --engine llava15-7b --metaDir /path/to/test.json --imageDir /path/to/VLGuard/test
CUDA_VISIBLE_DEVICES=0 python VLGuard_eval.py --dataset safe_unsafes --engine llava15-7b --metaDir /path/to/test.json --imageDir /path/to/VLGuard/test
CUDA_VISIBLE_DEVICES=0 python VLGuard_eval.py --dataset safe_safes --engine llava15-7b --metaDir /path/to/test.json --imageDir /path/to/VLGuard/test

The scripts will print out the ASR for safe_unsafes, and unsafes with string match (keywords here). The generated predictions will be saved to results folder.

To evaluate the helpfulness with safe_safes subset, run:

OPENAI_API_KEY="" # your OpenAI API key
python gpt4_evaluator.py --file_path results/safe_safes/{the_model_to_evaluate}.json --image_path /path/to/VLGuard/test --reference_path ./data/gpt4_safe_safes.json --output_path /path/to/save/results

It will calculate the win rate against GPT-4V.

Model Weights

We release the weights below. You can use them in exactly the same way as the original LLaVA.

Weights from Mixed Fine-tuning

ModelOriginal VLLMFine-tuning🤗 Checkpoint
LLaVA-v1.5-7B-MixedLLaVA-v1.5-7BFull FTys-zong/llava-v1.5-7b-Mixed
LLaVA-v1.5-7B-Mixed-LoRALLaVA-v1.5-7BLoRAys-zong/llava-v1.5-7b-Mixed-lora
LLaVA-v1.5-13B-MixedLLaVA-v1.5-13BFull FTys-zong/llava-v1.5-13b-Mixed
LLaVA-v1.5-13B-Mixed-LoRALLaVA-v1.5-13BLoRAys-zong/llava-v1.5-13b-Mixed-lora

Weights from Post-hoc Fine-tuning

ModelOriginal VLLMFine-tuning🤗 Checkpoint
LLaVA-v1.5-7B-PosthocLLaVA-v1.5-7BFull FTys-zong/llava-v1.5-7b-Posthoc
LLaVA-v1.5-7B-Posthoc-LoRALLaVA-v1.5-7BLoRAys-zong/llava-v1.5-7b-Posthoc-lora
LLaVA-v1.5-13B-PosthocLLaVA-v1.5-13BFull FTys-zong/llava-v1.5-13b-Posthoc
LLaVA-v1.5-13B-Posthoc-LoRALLaVA-v1.5-13BLoRAys-zong/llava-v1.5-13b-Posthoc-lora

We have also released the weights of "Clean" LLaVA-v1.5 that we re-trained after removing the harmful samples from the training data (Table 1).

ModelLLMFine-tuning🤗 Checkpoint
LLaVA-v1.5-7B-CleanVicuna-7BFull FTys-zong/llava-v1.5-7b-Clean
LLaVA-v1.5-7B-Clean-LoRAVicuna-7BLoRAys-zong/llava-v1.5-7b-Clean-lora
LLaVA-v1.5-13B-CleanVicuna-13BFull FTys-zong/llava-v1.5-13b-Clean
LLaVA-v1.5-13B-Clean-LoRAVicuna-13BLoRAys-zong/llava-v1.5-13b-Clean-lora

Usage

To fine-tune LLaVA or MiniGPT-v2, you can first run

python convert_to_llava_format.py

to convert VLGuard to LLaVA data format and follow their fine-tuning scripts to do the fine-tuning.

Citation

@article{zong2023safety,
  title={Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models},
  author={Zong, Yongshuo and Bohdal, Ondrej and Yu, Tingyang and Yang, Yongxin and Hospedales Timothy},
  journal={arXiv preprint arXiv:2402.02207},
  year={2024}
}