
# Adversarial Library

This library contains various resources related to adversarial attacks implemented in PyTorch. It is aimed towards researchers looking for implementations of state-of-the-art attacks.

The code was written to maximize efficiency (e.g. by preferring low-level PyTorch functions) while retaining simplicity (e.g. by avoiding abstractions). As a consequence, most of the library, and especially the attacks, is implemented with pure functions whenever possible.

While focused on attacks, this library also provides several related utilities: distances (SSIM, CIEDE2000, LPIPS), a visdom callback, projections, losses, and helper functions. Most notably, the function run_attack from utils/attack_utils.py performs an attack on a model, given the inputs and labels, with a fixed batch size, and reports complexity-related metrics (run time and numbers of forward/backward propagations).
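
For illustration, here is a minimal sketch of how run_attack could be called; the exact calling convention (the attack passed as a functools.partial and the batch_size keyword) is an assumption to verify against utils/attack_utils.py:

```python
from functools import partial

import torch
from torch import nn

from adv_lib.attacks import ddn
from adv_lib.utils.attack_utils import run_attack

# Toy setup: a small classifier taking images in [0, 1] and a random batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
inputs = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))

# Bind the attack's hyper-parameters, leaving model/inputs/labels free.
attack = partial(ddn, steps=300)

# Assumed calling convention (verify against utils/attack_utils.py):
# run_attack performs the attack with a fixed batch size and returns the
# reported metrics (run time, forward/backward propagations).
results = run_attack(model=model, inputs=inputs, labels=labels,
                     attack=attack, batch_size=8)
```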

## Dependencies

The goal of this library is to stay up to date with newer versions of PyTorch, so the dependencies are expected to be updated regularly (possibly resulting in breaking changes).

## Installation

You can either install using:

```bash
pip install git+https://github.com/jeromerony/adversarial-library
```

Or you can clone the repo and run:

```bash
python setup.py install
```

Alternatively, after cloning the repo, you can install the library in editable mode:

```bash
pip install -e .
```

## Usage

Attacks are implemented as functions, so they can be called directly by providing the model, samples and labels (possibly with optional arguments):

```python
from adv_lib.attacks import ddn

adv_samples = ddn(model=model, inputs=inputs, labels=labels, steps=300)
```

Classification attacks all expect the following arguments:

- `model`: the model to attack, which takes inputs and returns logits;
- `inputs`: a batch of samples, with values in [0, 1];
- `labels`: the labels corresponding to the samples.
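
Schematically, this shared calling convention looks like the following skeleton (illustrative only, not an actual function of the library; individual attacks add their own hyper-parameters as keyword arguments, such as steps for ddn above):

```python
from torch import Tensor, nn

def some_attack(model: nn.Module, inputs: Tensor, labels: Tensor,
                **hyper_parameters) -> Tensor:
    """Illustrative skeleton: an attack returns adversarial inputs with the
    same shape as `inputs`. Not an actual function from adv_lib."""
    ...
```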

Additionally, many attacks have an optional callback argument which accepts an adv_lib.utils.visdom_logger.VisdomLogger to plot data to a visdom server for monitoring purposes.
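
For instance, monitoring a DDN run could look like this (reusing model, inputs, and labels from the example above; the port argument is an assumption to verify against adv_lib/utils/visdom_logger.py, and a visdom server must be running):

```python
from adv_lib.attacks import ddn
from adv_lib.utils.visdom_logger import VisdomLogger

# Assumes a visdom server is running locally (python -m visdom.server) and
# that VisdomLogger accepts the server port; verify the constructor's actual
# parameters in adv_lib/utils/visdom_logger.py.
callback = VisdomLogger(port=8097)
adv_samples = ddn(model=model, inputs=inputs, labels=labels, steps=300,
                  callback=callback)
```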

For a more detailed example of how to use this library, see this repo: https://github.com/jeromerony/augmented_lagrangian_adversarial_attacks

## Contents

### Attacks

#### Classification

Currently the following classification attacks are implemented in the adv_lib.attacks module:

| Name | Knowledge | Type | Distance(s) | ArXiv Link |
|------|-----------|------|-------------|------------|
| Carlini and Wagner (C&W) | White-box | Minimal | $\ell_2$, $\ell_\infty$ | [1608.04644](https://arxiv.org/abs/1608.04644) |
| Projected Gradient Descent (PGD) | White-box | Budget | $\ell_\infty$ | [1706.06083](https://arxiv.org/abs/1706.06083) |
| Structured Adversarial Attack (StrAttack) | White-box | Minimal | $\ell_2$ + group-sparsity | [1808.01664](https://arxiv.org/abs/1808.01664) |
| **Decoupled Direction and Norm (DDN)** | White-box | Minimal | $\ell_2$ | [1811.09600](https://arxiv.org/abs/1811.09600) |
| Trust Region (TR) | White-box | Minimal | $\ell_2$, $\ell_\infty$ | [1812.06371](https://arxiv.org/abs/1812.06371) |
| Fast Adaptive Boundary (FAB) | White-box | Minimal | $\ell_1$, $\ell_2$, $\ell_\infty$ | [1907.02044](https://arxiv.org/abs/1907.02044) |
| Perceptual Color distance Alternating Loss (PerC-AL) | White-box | Minimal | CIEDE2000 | [1911.02466](https://arxiv.org/abs/1911.02466) |
| Auto-PGD (APGD) | White-box | Budget | $\ell_1$, $\ell_2$, $\ell_\infty$ | [2003.01690](https://arxiv.org/abs/2003.01690) <br> [2103.01208](https://arxiv.org/abs/2103.01208) |
| **Augmented Lagrangian Method for Adversarial (ALMA)** | White-box | Minimal | $\ell_1$, $\ell_2$, SSIM, CIEDE2000, LPIPS, ... | [2011.11857](https://arxiv.org/abs/2011.11857) |
| Folded Gaussian Attack (FGA) <br> Voting Folded Gaussian Attack (VFGA) | White-box | Minimal | $\ell_0$ | [2011.12423](https://arxiv.org/abs/2011.12423) |
| Fast Minimum-Norm (FMN) | White-box | Minimal | $\ell_0$, $\ell_1$, $\ell_2$, $\ell_\infty$ | [2102.12827](https://arxiv.org/abs/2102.12827) |
| Primal-Dual Gradient Descent (PDGD) <br> Primal-Dual Proximal Gradient Descent (PDPGD) | White-box | Minimal | $\ell_2$ <br> $\ell_0$, $\ell_1$, $\ell_2$, $\ell_\infty$ | [2106.01538](https://arxiv.org/abs/2106.01538) |
| σ-zero | White-box | Minimal | $\ell_0$ | [2402.01879](https://arxiv.org/abs/2402.01879) |

**Bold** means that this repository contains the official implementation.

*Type* refers to the goal of the attack:

- Minimal: the attack seeks an adversarial example with the smallest possible perturbation;
- Budget: the attack seeks an adversarial example within a fixed perturbation budget.
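
In the usual formulation (standard terminology, not specific to this library), for a classifier $f$, an input $x$ with label $y$, and a perturbation $\delta$, the two types correspond to:

$$\text{Minimal:} \quad \min_{\delta} \|\delta\| \quad \text{s.t.} \quad \arg\max_k f_k(x + \delta) \neq y \qquad\qquad \text{Budget:} \quad \max_{\|\delta\| \leq \varepsilon} \mathcal{L}\big(f(x + \delta),\, y\big)$$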

#### Segmentation

The library now includes segmentation attacks in the adv_lib.attacks.segmentation module. These require the following arguments:

- `model`: the segmentation model to attack, which returns per-pixel logits;
- `inputs`: a batch of images;
- `labels`: the corresponding per-pixel label maps.

The following segmentation attacks are implemented:

| Name | Knowledge | Type | Distance(s) | ArXiv Link |
|------|-----------|------|-------------|------------|
| Dense Adversary Generation (DAG) | White-box | Minimal | $\ell_2$, $\ell_\infty$ | [1703.08603](https://arxiv.org/abs/1703.08603) |
| Adaptive Segmentation Mask Attack (ASMA) | White-box | Minimal | $\ell_2$ | [1907.13124](https://arxiv.org/abs/1907.13124) |
| *Primal-Dual Gradient Descent (PDGD)* <br> *Primal-Dual Proximal Gradient Descent (PDPGD)* | White-box | Minimal | $\ell_2$ <br> $\ell_0$, $\ell_1$, $\ell_2$, $\ell_\infty$ | [2106.01538](https://arxiv.org/abs/2106.01538) |
| **ALMA prox** | White-box | Minimal | $\ell_\infty$ | [2206.07179](https://arxiv.org/abs/2206.07179) |

*Italic* indicates that the attack is unofficially adapted from its classification variant.
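
Usage mirrors the classification attacks. As a sketch, assuming ALMA prox is exposed as a function alma_prox (a hypothetical name; check the adv_lib.attacks.segmentation module for the actual names and parameters):

```python
import torch
from torch import nn

# Hypothetical import: the actual function names exposed by
# adv_lib.attacks.segmentation may differ.
from adv_lib.attacks.segmentation import alma_prox

# Toy setup: a model producing per-pixel logits for 21 classes, a batch of
# images in [0, 1], and per-pixel label maps.
model = nn.Conv2d(3, 21, kernel_size=1)
inputs = torch.rand(4, 3, 64, 64)
labels = torch.randint(0, 21, (4, 64, 64))

adv_inputs = alma_prox(model=model, inputs=inputs, labels=labels)
```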

### Distances

The following distances are available in the adv_lib.distances module:

- SSIM;
- CIEDE2000;
- LPIPS.
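
For example, a distance could be computed as follows (the import path and function name here are assumptions about the module layout; check adv_lib/distances for the actual ones):

```python
import torch

# Hypothetical import path: check adv_lib/distances for the actual module
# and function names.
from adv_lib.distances.structural_similarity import compute_ssim

x = torch.rand(8, 3, 32, 32)                      # images in [0, 1]
y = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)  # perturbed copies
ssim_values = compute_ssim(x, y)                  # per-sample similarity
```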

## Contributions

Suggestions and contributions are welcome :)

## Citation

If this library has been useful for your research, you can cite it using the "Cite this repository" button in the "About" section.