DarkSAM

The official implementation of our NeurIPS 2024 paper "DarkSAM: Fooling Segment Anything Model to Segment Nothing".

Python 3.8 | PyTorch 2.1.0

Abstract

Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks. Despite its promising prospects, the vulnerabilities of SAM, especially to universal adversarial perturbations (UAPs), have not been thoroughly investigated yet. In this paper, we propose DarkSAM, the first prompt-free universal attack framework against SAM, comprising a semantic decoupling-based spatial attack and a texture distortion-based frequency attack. We first divide the output of SAM into foreground and background. Then, we design a shadow target strategy to obtain the semantic blueprint of the image as the attack target. DarkSAM fools SAM by extracting and destroying crucial object features from images in both the spatial and frequency domains. In the spatial domain, we disrupt the semantics of both the foreground and background of the image to confuse SAM. In the frequency domain, we further enhance the attack effectiveness by distorting the high-frequency components (i.e., texture information) of the image. Consequently, with a single UAP, DarkSAM renders SAM incapable of segmenting objects across diverse images with varying prompts. Experimental results on four datasets for SAM and its two variant models demonstrate the powerful attack capability and transferability of DarkSAM.

<img src="image/pipeline.png"/>
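The frequency-domain part of the attack distorts the high-frequency (texture) content of images. The snippet below is only a rough, self-contained sketch of that general idea, not the repository's implementation: the FFT-based split, the cutoff ratio, and the noise injection are illustrative assumptions.

```python
# Illustrative sketch only: isolate and perturb the high-frequency
# (texture) content of an image with a Fourier transform. The cutoff
# ratio and the way the perturbation is injected are assumptions.
import torch

def split_frequencies(img: torch.Tensor, cutoff: float = 0.25):
    """Split an image (C, H, W) into low- and high-frequency components.

    `cutoff` is the fraction of the centered spectrum kept as "low"
    frequency; everything outside that square is treated as texture.
    """
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    _, h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * cutoff / 2), int(w * cutoff / 2)
    low_mask = torch.zeros_like(spec.real)
    low_mask[:, cy - ry:cy + ry, cx - rx:cx + rx] = 1.0
    low_spec, high_spec = spec * low_mask, spec * (1 - low_mask)
    to_img = lambda s: torch.fft.ifft2(torch.fft.ifftshift(s, dim=(-2, -1))).real
    return to_img(low_spec), to_img(high_spec)

img = torch.rand(3, 224, 224)                    # stand-in for a normalized image
low, high = split_frequencies(img)
distorted = low + 0.5 * torch.randn_like(high)   # e.g. corrupt only the texture part
```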

Latest Update

2024/11/4: We have released the official implementation code.

Setup

git clone https://github.com/CGCL-codes/DarkSAM.git
cd DarkSAM
# use Anaconda to build the environment
conda create -n DarkSAM python=3.8
conda activate DarkSAM
# install packages
pip install -r requirements.txt

Quick Start

cd repo
bash init_repos.sh
cd ..
python darksam_attack.py   # results saved in uap_file/SA1B.pth
python darksam_test.py     # results saved in /result/test
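Once darksam_attack.py has produced the UAP, it can be added to clean images before they are fed to SAM for evaluation. The sketch below illustrates that step under stated assumptions: it assumes uap_file/SA1B.pth stores the perturbation as a single tensor and that inputs are preprocessed to the same resolution, which may differ from how darksam_test.py actually loads and applies it.

```python
# Minimal sketch: load a saved universal perturbation and apply it to a
# batch of images. Assumes the .pth file holds the perturbation tensor
# directly; the actual format written by darksam_attack.py may differ.
import torch

uap = torch.load("uap_file/SA1B.pth", map_location="cpu")
images = torch.rand(4, 3, *uap.shape[-2:])         # stand-in for preprocessed inputs
adv_images = torch.clamp(images + uap, 0.0, 1.0)   # add the UAP, keep pixels in [0, 1]
```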

BibTeX

If you find DarkSAM interesting and helpful, please consider citing it in your research or publications:

@inproceedings{zhou2024darksam,
  title={{DarkSAM}: Fooling Segment Anything Model to Segment Nothing},
  author={Zhou, Ziqi and Song, Yufei and Li, Minghui and Hu, Shengshan and Wang, Xianlong and Zhang, Leo Yu and Yao, Dezhong and Jin, Hai},
  booktitle={Proceedings of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS'24)},
  year={2024}
}