# ECSO
This repository contains the implementation of the paper:
**ECSO: Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation** <br>
Yunhao Gou, Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Aoxue Li, Zhenguo Li, Dit-Yan Yeung, James T. Kwok, Yu Zhang <br>
European Conference on Computer Vision (ECCV), 2024

<img src="./assets/framework.png" alt="drawing" width="800"/>
## Installation

1. Clone this repository and navigate to the ECSO folder.

```bash
git clone https://github.com/gyhdog99/ecso/
cd ECSO-main
```
2. Install the package.

```bash
conda create -n ecso python=3.10 -y
conda activate ecso
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```
## Demo

We show the four core steps of ECSO (i.e., 1. direct answer, 2. harm detection, 3. query-aware image-to-text (I2T) captioning, 4. safe generation without images) in a Gradio demo, which looks like the following GIF:
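For reference, these four steps map onto a simple inference loop. Below is a minimal Python sketch; the `mllm` interface and the prompt wordings are illustrative stand-ins, not the exact templates used in the paper or this repo.

```python
# Sketch of ECSO's training-free inference loop (illustrative only).

def ecso_generate(mllm, image, query: str) -> str:
    # Step 1: direct answer -- respond to the query with the image as usual.
    answer = mllm.generate(image=image, prompt=query)

    # Step 2: harm detection -- ask the MLLM to judge its own answer.
    verdict = mllm.generate(
        image=image,
        prompt=f"Is the following response harmful? Answer yes or no.\n{answer}",
    )
    if "yes" not in verdict.lower():
        return answer  # judged safe: keep the direct answer

    # Step 3: query-aware I2T captioning -- turn the image into text,
    # keeping only the content relevant to the query.
    caption = mllm.generate(
        image=image,
        prompt=f"Describe the parts of the image relevant to: {query}",
    )

    # Step 4: safe generation without the image -- answer text-only, so the
    # aligned LLM's text-side safety behavior can take over.
    return mllm.generate(
        image=None,
        prompt=f"Image description: {caption}\nQuestion: {query}",
    )
```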
<img src="./assets/demo.gif" alt="drawing" width="800"/>

To launch the Gradio demo locally, run the following commands one by one.
### Launch a controller

```bash
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```
### Launch a Gradio web server

```bash
python -m llava.serve.gradio_web_server_ecso --controller http://localhost:10000 --model-list-mode reload
```
You have just launched the Gradio web interface. You can now open it with the URL printed on the screen. You may notice that the model list is empty; do not worry, as we have not launched any model worker yet. The list will be updated automatically when you launch a model worker.
### Launch a model worker

```bash
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path llava-v1.5-7b
```
Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
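If the model still does not appear, one way to sanity-check the setup is to query the controller directly. This is a minimal sketch assuming the stock LLaVA controller API (its `/refresh_all_workers` and `/list_models` endpoints), which this repo builds on.

```python
import requests

CONTROLLER_URL = "http://localhost:10000"  # the controller launched above

# Ask the controller to re-poll its registered workers, then list models.
requests.post(f"{CONTROLLER_URL}/refresh_all_workers")
models = requests.post(f"{CONTROLLER_URL}/list_models").json()["models"]
print("Registered models:", models)  # should include llava-v1.5-7b
```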
## Evaluation on Safety Benchmarks

### Data/Model Preparation

Download VLSafe, MM-SafetyBench, and the COCO images.
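For the COCO images, the sketch below shows one way to fetch them. It assumes the train2017 split and a local `data/coco` directory, both of which are assumptions on my part; check the evaluation scripts for the exact split and paths they expect.

```python
import urllib.request
import zipfile
from pathlib import Path

# Hypothetical local layout; the evaluation scripts may expect different paths.
data_dir = Path("data/coco")
data_dir.mkdir(parents=True, exist_ok=True)

# Official COCO image archive (the split the benchmarks use may differ).
url = "http://images.cocodataset.org/zips/train2017.zip"
archive = data_dir / "train2017.zip"

if not archive.exists():
    urllib.request.urlretrieve(url, archive)  # large download (~19 GB)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(data_dir)  # extracts to data/coco/train2017/
```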
### VLSafe

- Generate direct/ECSO responses

```bash
bash scripts/v1_5/eval_safe/gen_vlsafe.sh
bash scripts/v1_5/eval_safe/gen_vlsafe_tell_ask.sh
```
- Evaluation

```bash
bash llava/eval/eval_vlsafe.sh
```
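The generation scripts above produce response files that the evaluation script then scores. As a mental model only (this is a generic stand-in, not necessarily what `eval_vlsafe.sh` implements), many safety benchmarks report an attack success rate via a simple refusal-keyword check:

```python
# Generic refusal-keyword scorer (illustrative; not the repo's actual metric).
REFUSAL_MARKERS = [
    "i'm sorry", "i am sorry", "i cannot", "i can't",
    "as an ai", "it is not appropriate",
]

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def attack_success_rate(responses: list[str]) -> float:
    """Fraction of harmful queries the model answered without refusing."""
    if not responses:
        return 0.0
    return sum(not is_refusal(r) for r in responses) / len(responses)
```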
### MM-SafetyBench

- Generate direct/ECSO responses

```bash
bash scripts/v1_5/eval_safe/gen_mmsafe.sh
bash scripts/v1_5/eval_safe/gen_mmsafe_tell_ask.sh
```
- Evaluation

```bash
bash llava/eval/eval_mmsafe_loop.sh
```
## Evaluating Utility on MLLM Benchmarks

### Data/Model Preparation

Follow the guidelines to download the evaluation data for MME, MMBench, and MM-Vet.
### MME

- Generate direct/ECSO responses

```bash
bash scripts/v1_5/eval/mme.sh
bash scripts/v1_5/eval_safe/gen_mme_unsafe_ask.sh
```
### MMBench

- Generate direct/ECSO responses

```bash
bash scripts/v1_5/eval/mmbench.sh
bash scripts/v1_5/eval_safe/gen_mmbench_unsafe_ask.sh
```
### MM-Vet

- Generate direct/ECSO responses

```bash
bash scripts/v1_5/eval/mmvet.sh
bash scripts/v1_5/eval_safe/gen_mm-vet_unsafe_ask.sh
```
## Acknowledgement

- [LLaVA](https://github.com/haotian-liu/LLaVA): This repository is built upon LLaVA!
## Citation

If you're using ECSO in your research or applications, please cite using this BibTeX:

```bibtex
@article{gou2024eyes,
  title={Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation},
  author={Gou, Yunhao and Chen, Kai and Liu, Zhili and Hong, Lanqing and Xu, Hang and Li, Aoxue and Li, Zhenguo and Yeung, Dit-Yan and Kwok, James T and Zhang, Yu},
  journal={arXiv preprint arXiv:2403.09572},
  year={2024}
}
```