GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features
Official PyTorch implementation of "GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features" (ECCV 2024). For details, see the paper on arXiv.
Luc P.J. Sträter*, Mohammadreza Salehi*, Efstratios Gavves, Cees G. M. Snoek, Yuki M. Asano | University of Amsterdam
Method
GeneralAD is a versatile anomaly detection framework designed to operate effectively in semantic, near-distribution, and industrial settings with minimal per-task adjustments. It capitalizes on the inherent design of Vision Transformers, which are trained on image patches, thereby ensuring that the last hidden states retain a patch-based structure. Furthermore, it introduces a novel self-supervised anomaly generation module that employs straightforward operations like noise addition and feature shuffling to construct pseudo-abnormal samples from patch features. These features are then fed to an attention-based discriminator, which is trained to score every patch in the image. Through this approach, GeneralAD can both accurately identify anomalies at the image level and generate interpretable anomaly maps, enhancing its utility across various applications.
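To make the distortion idea concrete, below is a minimal sketch of how pseudo-abnormal patch features can be produced from normal ViT patch features via noise addition and feature shuffling. It illustrates the concept rather than reproducing the repository's module; the function name, argument names, and default values are assumptions.

```python
import torch

def distort_patch_features(features: torch.Tensor,
                           noise_std: float = 0.25,
                           shuffle_frac: float = 0.1) -> torch.Tensor:
    """Turn normal ViT patch features into pseudo-abnormal ones.

    features: (batch, num_patches, dim) last hidden states of a ViT.
    Illustrative sketch only; names and defaults are assumptions.
    """
    distorted = features.clone()
    b, n, _ = distorted.shape

    # 1) Noise addition: perturb every patch feature with Gaussian noise.
    distorted = distorted + noise_std * torch.randn_like(distorted)

    # 2) Feature shuffling: permute a random subset of patch positions
    #    within each image to break spatial feature consistency.
    num_shuffled = max(1, int(shuffle_frac * n))
    for i in range(b):
        idx = torch.randperm(n)[:num_shuffled]
        distorted[i, idx] = distorted[i, idx[torch.randperm(num_shuffled)]]

    return distorted
```

The attention-based discriminator is then trained to assign high anomaly scores to the distorted patches and low scores to the original ones.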
Training
<a name="training"> </a>
In the folder `GeneralAD/jobs` you can find job files to run GeneralAD, SimpleNet, and KDAD. The job files include all the hyperparameters needed for training. For monitoring on WandB, add the arguments `wandb_entity` and `wandb_api_key`. If you only want to run inference, add the arguments `load_checkpoint` and `checkpoint_dir`.
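If you want to see what the WandB arguments amount to in code, the sketch below shows the standard way such credentials are consumed with the `wandb` library. It is a generic illustration assuming the usual `wandb.init` flow; the function name `setup_wandb` and the project name are placeholders, not the repository's actual code path.

```python
import os
import wandb

def setup_wandb(wandb_entity: str, wandb_api_key: str, project: str = "GeneralAD"):
    """Authenticate with WandB and start a run (illustrative placeholder)."""
    os.environ["WANDB_API_KEY"] = wandb_api_key  # avoids an interactive `wandb login`
    return wandb.init(entity=wandb_entity, project=project)

# Example with hypothetical values:
# run = setup_wandb("my-team", "xxxxxxxxxxxxxxxx")
```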
Datasets
CIFAR10, CIFAR100, FGVCAircraft, and FashionMNIST are loaded from torchvision directly in the code. To run on the other datasets, download them here: MVTec-AD, MVTec-LOCO, VisA, MPDD, Stanford-Cars, View, and DogsvsCats.
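As an illustration of the torchvision-backed datasets, the snippet below loads CIFAR10 and filters the training split to a single "normal" class, which is the usual setup for semantic one-class anomaly detection. This is a generic sketch, not the repository's data pipeline; the `normal_class` value and the transform are assumptions.

```python
import torch
from torchvision import datasets, transforms

normal_class = 0  # e.g. "airplane"; an assumption for this example
transform = transforms.Compose([
    transforms.Resize(224),  # ViT backbones typically expect 224x224 inputs
    transforms.ToTensor(),
])

# Training data: only images of the "normal" class.
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
normal_idx = [i for i, y in enumerate(train_set.targets) if y == normal_class]
train_normal = torch.utils.data.Subset(train_set, normal_idx)

# Test data: all classes, so non-normal classes act as anomalies.
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
```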
Requirements
<a name="requirements"> </a>
Our training process is conducted on a single NVIDIA A100-SXM4-40GB GPU. We recommend using conda for installing the necessary packages. If you haven't installed conda yet, you can find instructions here. The steps for installing the requirements are:
1. Create a new environment from the provided YAML file: `conda env create -f environment.yml`
2. Activate the environment: `conda activate ls_gpu`
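After activating the environment, a quick sanity check (a suggestion, not part of the repository) confirms that PyTorch can see the GPU:

```python
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))  # e.g. NVIDIA A100-SXM4-40GB
```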
Results
We achieve the following image-level AUROC results. To reproduce them, see `run_general_ad.job` in `GeneralAD/jobs`. To run the job file, add the arguments `wandb_entity` and `wandb_api_key` and uncomment the desired dataset.
We achieve the following pixel-level AUROC results. To reproduce them, see `run_general_ad.job` in `GeneralAD/jobs`. To run the job file, add the arguments `wandb_entity` and `wandb_api_key`, set `val_monitor="pixel_auroc"` and `log_pixel_metrics=1`, and uncomment the desired dataset.
We achieve the following qualitative results. To reproduce them, first run GeneralAD; this saves checkpoints in the `lightning_logs` folder. Then point the `checkpoint_dir` argument in `run_segmentation.job` in `GeneralAD/jobs` to the saved checkpoint.
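For reference, the sketch below shows how image-level and pixel-level AUROC are commonly computed: one score per image for the former, and flattened anomaly maps against ground-truth masks for the latter. It uses standard scikit-learn calls with toy data; variable names and values are illustrative, not the repository's evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Image-level: one anomaly score per image, label 0 = normal, 1 = anomalous.
image_labels = np.array([0, 0, 1, 1])
image_scores = np.array([0.10, 0.20, 0.80, 0.65])
image_auroc = roc_auc_score(image_labels, image_scores)

# Pixel-level: flatten ground-truth masks and predicted anomaly maps.
pixel_masks = np.random.randint(0, 2, size=(4, 224, 224))  # toy ground truth
pixel_maps = np.random.rand(4, 224, 224)                    # toy anomaly maps
pixel_auroc = roc_auc_score(pixel_masks.ravel(), pixel_maps.ravel())

print(f"image-level AUROC: {image_auroc:.3f}, pixel-level AUROC: {pixel_auroc:.3f}")
```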
Baselines
In this repository, we have also implemented the following baseline methods for better comparison:
- KDAD: Knowledge Distillation-based Anomaly Detection, a method leveraging distillation techniques to identify anomalies in data.
- SimpleNet: A lightweight and effective approach tailored for anomaly detection.
Both methods can also be used with Vision Transformer (ViT) backbones.
Citation
<a name="citation"> </a>
If you find this repository useful, please consider giving a star ⭐ and citation 📣:
@inproceedings{strater2025generalad,
title={Generalad: Anomaly detection across domains by attending to distorted features},
author={Str{\"a}ter, Luc PJ and Salehi, Mohammadreza and Gavves, Efstratios and Snoek, Cees GM and Asano, Yuki M},
booktitle={European Conference on Computer Vision},
pages={448--465},
year={2025},
organization={Springer}
}