# MLLM-Refusal
## Instructions for reimplementing MLLM-Refusal
### 1. Install the required packages
```bash
git clone https://github.com/Sadcardation/MLLM-Refusal.git
cd MLLM-Refusal
conda env create -f environment.yml
conda activate mllm_refusal
```
- **Oct 16, 2024:** Because many libraries have been updated, running the above commands may no longer prepare the environment correctly for this project. We recommend preparing a separate environment for each MLLM according to its own instructions and installing the necessary libraries accordingly. The libraries for a unified environment are listed in `requirements.txt`.
### 2. Prepare the datasets
Check the datasets from the following links:
- CelebA: Download Link (Validation)
- GQA: Download Link (Test Balanced)
- TextVQA: Download Link (Test)
- VQAv2: Download Link (Validation)
Download the datasets and place them in the `datasets` directory. The directory structure should look like this:
```
MLLM-Refusal
└── datasets
    ├── CelebA
    │   ├── Images
    │   │   ├── 166872.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    ├── GQA
    │   ├── Images
    │   │   ├── n179334.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    ├── TextVQA
    │   ├── Images
    │   │   ├── 6a45a745afb68f73.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    └── VQAv2
        ├── Images
        │   └── mscoco
        │       └── val2014
        │           ├── COCO_val2014_000000000042.jpg
        │           └── ...
        ├── sampled_data_100.xlsx
        └── similar_questions.json
```
`sampled_data_100.xlsx` contains the 100 sampled image-question pairs for each dataset. `similar_questions.json` contains the similar questions for each question in the sampled data.
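For example, one dataset's sampled data and similar questions can be inspected with a short script like the following (a minimal sketch; the spreadsheet's column names and the exact JSON structure are not specified here, so they are only probed, not assumed):

```python
# Minimal sketch for inspecting one dataset's sampled data and similar questions.
# Requires pandas and openpyxl for reading the .xlsx file.
import json
import pandas as pd

dataset_dir = "datasets/VQAv2"

# 100 sampled image-question pairs for this dataset
sampled = pd.read_excel(f"{dataset_dir}/sampled_data_100.xlsx")
print(sampled.shape)             # expect 100 rows
print(sampled.columns.tolist())  # inspect the actual column names

# Similar questions for each sampled question
with open(f"{dataset_dir}/similar_questions.json") as f:
    similar = json.load(f)
print(type(similar), len(similar))  # inspect the actual JSON structure
```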
### 3. Prepare the MLLMs
Clone the MLLM repositories, place them in the `models` directory, and follow the installation instructions for each MLLM. Include the corresponding `utils` directory in each MLLM's directory.
- LLaVA

  Additional instructions:

  - Add `config.mm_vision_tower = "openai/clip-vit-large-patch14"` below here to replace the original vision encoder `openai/clip-vit-large-patch14-336` that LLaVA uses, so that the resolutions of perturbed images are unified between different MLLMs (see the resolution check sketched after this list).
  - Comment out all `@torch.no_grad()` decorators on `forward`-related functions in the image encoder modeling file `clip_encoder.py`.

- Qwen-VL

  Additional instructions:

  - Add `if kwargs: kwargs['visual']['image_size'] = 224` below here to unify the resolutions of perturbed images between different MLLMs.
  - Add `image_emb = None,` as an additional argument to the forward function of `QWenModel`, and replace this line of code with `images = image_emb if image_emb is not None else self.visual.encode(images)` so that image embeddings can be passed directly to the forward function.
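As a quick way to confirm the resolution change described above, the two CLIP checkpoints can be compared through their image processors. This is only a sanity-check sketch using the `transformers` library and the Hugging Face Hub; it is not part of this repository's code:

```python
# Sanity-check sketch: the patch14 checkpoint expects 224x224 inputs, while the
# patch14-336 checkpoint that LLaVA uses by default expects 336x336 inputs.
from transformers import CLIPImageProcessor

for name in ["openai/clip-vit-large-patch14-336", "openai/clip-vit-large-patch14"]:
    proc = CLIPImageProcessor.from_pretrained(name)
    # crop_size may be an int or a {"height", "width"} dict depending on the
    # transformers version.
    print(name, proc.crop_size)
```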
### 4. Run the experiments
To produce images with refusal perturbations on the 100 sampled images of the VQAv2 dataset for LLaVA-1.5, with three different types of shadow questions under the default settings, run the following command:
```bash
./attack.sh
```
The results will be saved under LLaVA-1.5's directory.
### 5. Evaluate the results
To evaluate the results, run the following command:
```bash
./evaluate.sh
```
with the corresponding MLLM's directory and the name of the result directory. Refusal rates will be printed to the terminal and saved in each result directory.
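For reference, a refusal rate is simply the fraction of responses in which the MLLM refuses to answer. A hypothetical keyword-matching version is sketched below; the patterns and matching rule are assumptions for illustration only, not the criteria used by `evaluate.sh`:

```python
# Hypothetical illustration of a refusal-rate computation; the refusal patterns
# and matching rule are assumptions, not the repo's actual evaluation logic.
REFUSAL_PATTERNS = ("i cannot", "i can't", "i am sorry", "i'm sorry", "unable to answer")

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(pattern in text for pattern in REFUSAL_PATTERNS)

def refusal_rate(responses: list[str]) -> float:
    return sum(is_refusal(r) for r in responses) / max(len(responses), 1)

# Example: one refusal out of two responses -> 0.5
print(refusal_rate(["I cannot answer this question.", "The man is wearing a red hat."]))
```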
## Citation
If you find MLLM-Refusal helpful in your research, please consider citing:
```bibtex
@article{shao2024refusing,
  title={Refusing Safe Prompts for Multi-modal Large Language Models},
  author={Shao, Zedian and Liu, Hongbin and Hu, Yuepeng and Gong, Neil Zhenqiang},
  journal={arXiv preprint arXiv:2407.09050},
  year={2024}
}
```