
VA3

Official code for "VA3: Virtually Assured Amplification Attack on Probabilistic Copyright Protection for Text-to-Image Generative Models", CVPR 2024 (Highlight). Previously titled on arXiv: "Probabilistic Copyright Protection Can Fail for Text-to-Image Generative Models".


We introduce the Virtually Assured Amplification Attack (VA3), a novel online attack framework that exposes the vulnerabilities of probabilistic copyright protection mechanisms. The framework significantly amplifies the probability of generating infringing content through sustained interactions with generative models, while lower-bounding the success probability of each engagement. Our theoretical and experimental results demonstrate the effectiveness of the approach and highlight the risk of deploying probabilistic copyright protection in practical applications of text-to-image generative models.
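The amplification intuition can be sketched numerically: if each engagement succeeds with probability at least p_min (the lower bound VA3 maintains), then over n independent attempts the chance of at least one infringing generation approaches 1. This is an illustrative calculation, not code from the paper; the bound 1 - (1 - p_min)^n assumes independence across attempts.

```python
def amplified_success_probability(p_min: float, n_attempts: int) -> float:
    """Lower bound on the probability that at least one of n independent
    attempts produces infringing content, given that each attempt
    succeeds with probability at least p_min."""
    return 1.0 - (1.0 - p_min) ** n_attempts
```

For example, even a modest per-attempt bound of p_min = 0.05 yields an overall success probability above 0.99 after 100 interactions, which is why sustained interaction makes the attack "virtually assured".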


Requirements

The code is built on PyTorch and the Hugging Face transformers and diffusers libraries.

pip install -r requirements.txt 

Checkpoints

The q1 and q2 models for the target image in Figure 1 can be downloaded from Google Drive. Download and unzip them to ./ckpts.

The SSCD model for evaluation can be downloaded from Github-SSCD.

Anti-NAF Optimization

bash ./scripts/run_anti_naf.sh

Arguments:

Sampling

bash ./scripts/run_sample.sh

Arguments:

Evaluation

Evaluation on single prompt:

bash ./scripts/run_eval.sh

Arguments:

Evaluation on prompt selection:

bash ./scripts/run_eval_bandit.sh

Additional arguments:
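The prompt-selection evaluation script's name suggests a multi-armed bandit treatment, where each candidate prompt is an arm and an infringing generation is a reward. As a purely illustrative sketch (the function name, scoring rule, and UCB1 strategy here are assumptions, not the repository's actual implementation), online prompt selection could look like:

```python
import math

def select_prompt_ucb(successes: list, pulls: list, t: int, c: float = 1.0) -> int:
    """UCB1-style index: choose the prompt with the highest empirical
    success rate plus an exploration bonus. `successes[i]` counts
    infringing generations from prompt i, `pulls[i]` counts how often
    prompt i was tried, and `t` is the current round."""
    best_score, best_idx = -1.0, 0
    for i, (s, n) in enumerate(zip(successes, pulls)):
        if n == 0:
            return i  # try every prompt at least once
        score = s / n + c * math.sqrt(2.0 * math.log(t) / n)
        if score > best_score:
            best_score, best_idx = score, i
    return best_idx
```

Under this kind of strategy, prompts with higher observed success rates are queried more often, which is one way an attacker could keep the per-engagement success probability bounded from below while interacting with the model online.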