MLLM-Refusal

Instructions for reimplementing MLLM-Refusal

1. Install the required packages

git clone https://github.com/Sadcardation/MLLM-Refusal.git
cd MLLM-Refusal
conda env create -f environment.yml
conda activate mllm_refusal

2. Prepare the datasets

Download the four datasets (CelebA, GQA, TextVQA, and VQAv2) from their official sources and place them in the datasets directory. The directory structure should look like this:

MLLM-Refusal
└── datasets
    ├── CelebA
    │   ├── Images
    │   │   ├── 166872.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    ├── GQA
    │   ├── Images
    │   │   ├── n179334.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    ├── TextVQA
    │   ├── Images
    │   │   ├── 6a45a745afb68f73.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    └── VQAv2
        ├── Images
        │   └── mscoco
        │       └── val2014
        │           ├── COCO_val2014_000000000042.jpg
        │           └── ...
        ├── sampled_data_100.xlsx
        └── similar_questions.json   

sampled_data_100.xlsx contains the 100 sampled image-question pairs for each dataset. similar_questions.json contains the similar questions for each question in the sampled data.
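
To sanity-check the files, a minimal sketch like the following can be used (the VQAv2 paths are just an example; the spreadsheet column names and the exact JSON structure are assumptions to verify against the actual files):

import json
import pandas as pd

# Load the 100 sampled image-question pairs for one dataset (VQAv2 here).
sampled = pd.read_excel("datasets/VQAv2/sampled_data_100.xlsx")
print(sampled.shape)              # expected: 100 rows, one per sampled pair
print(sampled.columns.tolist())   # inspect the actual column names

# Load the similar questions keyed by the sampled questions.
with open("datasets/VQAv2/similar_questions.json") as f:
    similar_questions = json.load(f)
print(len(similar_questions))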

3. Prepare the MLLMs

Clone the MLLM repositories into the models directory and follow the installation instructions for each MLLM. Also copy the corresponding utils directory into each MLLM's directory.
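
After setup, the models directory might look like the following (LLaVA-1.5 is the model used in the experiments below; the folder names are illustrative, so use whatever names the scripts expect):

MLLM-Refusal
└── models
    ├── LLaVA-1.5
    │   ├── utils        (corresponding utils directory copied in)
    │   └── ...          (the MLLM's own files)
    └── ...              (other MLLMs, one directory each)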

4. Run the experiments

To produce images with refusal perturbations for the 100 sampled VQAv2 images on LLaVA-1.5, using three different types of shadow questions under the default settings, run the following command:

./attack.sh

The results will be saved under LLaVA-1.5's directory.
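
For intuition, the following is a minimal sketch of how such a refusal perturbation could be optimized, assuming a projected-gradient-style attack on the image. It is not the pipeline behind attack.sh: model_loss, the epsilon budget, the step size, and the number of steps are all placeholder assumptions.

import torch

def optimize_refusal_perturbation(model_loss, image, epsilon=8/255, alpha=1/255, steps=300):
    """image: float tensor in [0, 1].
    model_loss(x) is assumed to return the loss of the MLLM generating a
    refusal response given image x and the shadow questions."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = model_loss(image + delta)
        loss.backward()
        with torch.no_grad():
            # Descend on the refusal loss, then project the perturbation
            # into the epsilon-ball and keep the perturbed image valid.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()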

5. Evaluate the results

To evaluate the results, run the following command:

./evaluate.sh

with the corresponding MLLM's directory and the name of the result directory. Refusal rates will be printed to the terminal and saved in each result directory.
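
Conceptually, the refusal rate is the fraction of responses that refuse to answer. Below is a minimal sketch of one way to compute it by keyword matching; the phrase list and matching rule are assumptions, and evaluate.sh may use a different criterion.

# Hedged sketch of a refusal-rate computation by keyword matching.
REFUSAL_PHRASES = [
    "i cannot", "i can't", "i'm sorry", "unable to answer",
    "cannot assist", "cannot provide",
]

def refusal_rate(responses):
    """responses: list of MLLM answers to the sampled questions."""
    refused = sum(
        any(phrase in r.lower() for phrase in REFUSAL_PHRASES)
        for r in responses
    )
    return refused / len(responses)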

Citation

If you find MLLM-Refusal helpful in your research, please consider citing:

@article{shao2024refusing,
  title={Refusing Safe Prompts for Multi-modal Large Language Models},
  author={Shao, Zedian and Liu, Hongbin and Hu, Yuepeng and Gong, Neil Zhenqiang},
  journal={arXiv preprint arXiv:2407.09050},
  year={2024}
}

Acknowledgement