Official Code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users

[Figure: ART pipeline overview]

Create Conda Environment

conda env create -f environment.yml
conda activate art

Preparation

  1. Download the images from the URLs in Meta Data.json and save them in the imgs folder.
  2. Fine-tune LLaVA-1.6-Mistral-7B on VLM Data.json. Please see the doc.
  3. Fine-tune Llama-2-7B on LLM Train Data.json. Please see the doc.
  4. Fill in your Hugging Face access token in prompt_content_dection.py for Meta-Llama-Guard-2-8B (see the sketch after this list).
  5. Set the model path in run_art.sh.
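
As a reference for step 4, here is a minimal sketch of loading the gated Meta-Llama-Guard-2-8B checkpoint with a Hugging Face access token via transformers; the variable names are illustrative and may differ from the placeholders in prompt_content_dection.py.

# Illustrative sketch for step 4: loading the gated Llama Guard model with an
# access token via transformers. Variable names are assumptions; match them to
# the placeholders in prompt_content_dection.py.
from transformers import AutoModelForCausalLM, AutoTokenizer

HF_ACCESS_TOKEN = "hf_xxx"  # your Hugging Face access token
GUARD_MODEL_ID = "meta-llama/Meta-Llama-Guard-2-8B"

tokenizer = AutoTokenizer.from_pretrained(GUARD_MODEL_ID, token=HF_ACCESS_TOKEN)
model = AutoModelForCausalLM.from_pretrained(
    GUARD_MODEL_ID, token=HF_ACCESS_TOKEN, torch_dtype="auto", device_map="auto"
)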

Datasets and Models on Hugging Face

You can find our dataset in url.

You can find our models in url and url.

You can also generate your own dataset with our scripts craft_vlm_dataset.py and craft_llm_dataset.py.

NOTE: Please rename the ART_GuuideModel folder, as the LLaVA builder uses strict name matching. Please refer to this issue.
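
For reference, a minimal sketch of the rename; the target folder name below is only an example, and the point of the LLaVA builder's name matching is that the new name should contain "llava" (plus "lora" if the weights are LoRA adapters).

# Illustrative sketch: rename the downloaded guide model so the LLaVA builder
# recognizes it. The target name is an example, not a required value.
from pathlib import Path

Path("ART_GuuideModel").rename("llava-lora-art-guide")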

Run the code

You can run the script for all categories with:

./run_art.sh

NOTE: Remember to change LLAVA_LORA_PATH to your renamed folder.

You can also modify the script to run a specific category under different settings, such as resolution, guidance scale, and random seed.
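
For illustration only (the actual generation code lives in the repo's scripts), here is a sketch of how resolution, guidance scale, and random seed typically enter a diffusers text-to-image call; the model ID and prompt are placeholders.

# Illustrative only: how resolution, guidance scale, and seed are typically
# passed to a diffusers text-to-image pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16  # placeholder model ID
).to("cuda")
generator = torch.Generator("cuda").manual_seed(42)  # random seed
image = pipe(
    "a prompt produced by ART",   # placeholder prompt
    height=512, width=512,        # resolution
    guidance_scale=7.5,           # guidance scale
    generator=generator,
).images[0]
image.save("example.png")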

Running the code requires four GPUs. GPU indices start from 0 in our code.
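
As a quick sanity check before launching (a sketch; the script itself handles device placement):

# Sanity check: the pipeline expects four GPUs, indexed from 0.
import torch

available = torch.cuda.device_count()
assert available >= 4, f"Expected at least 4 GPUs, found {available}"
for idx in range(4):
    print(idx, torch.cuda.get_device_name(idx))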

Generate the results

You can run the following script to generate the results:

./run_image_generation.sh

Before that, set the seed_list in generate_images.py to the seeds used in the previous step, and update the data path accordingly.
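
For example (an illustrative sketch; the exact variable names in generate_images.py may differ), the values to edit typically look like:

# Illustrative sketch of the values to edit in generate_images.py.
# Names and values are assumptions; keep them consistent with the script.
seed_list = [0, 1, 2]              # the random seeds used when running run_art.sh
data_path = "path/to/art_results"  # outputs produced in the previous step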

Evaluation

You can run the following script to evaluate the results:

./run_summary.sh

Before that, set the seed_list in summarize_results.py to the seeds used in the previous step, and update the data path accordingly.

License

Please follow the licenses of Lexica, Llama 3, LLaVA, and Llama 2. The code is released under the MIT license.