# Official Implementation of AnoVL (Updating)

**AnoVL: Adapting Vision-Language Models for Unified Zero-shot Anomaly Localization**
## Dataset Preparation
### MVTec AD
- Download and extract [MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) into `data/mvtec`.
- Run `python data/mvtec.py` to obtain `data/mvtec/meta.json`.
```
data
├── mvtec
│   ├── meta.json
│   ├── bottle
│   │   ├── train
│   │   │   └── good
│   │   │       └── 000.png
│   │   ├── test
│   │   │   ├── good
│   │   │   │   └── 000.png
│   │   │   └── anomaly1
│   │   │       └── 000.png
│   │   └── ground_truth
│   │       └── anomaly1
│   │           └── 000_mask.png
```
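For reference, here is a minimal sketch of what an indexing script like `data/mvtec.py` might do, assuming the layout above. The field names (`img_path`, `mask_path`, `cls_name`, `anomaly`) are illustrative; the repo's actual script and schema may differ.

```python
import json
import os

root = "data/mvtec"
meta = {"train": {}, "test": {}}

for category in sorted(os.listdir(root)):
    cat_dir = os.path.join(root, category)
    if not os.path.isdir(cat_dir):
        continue
    for split in ("train", "test"):
        samples = []
        split_dir = os.path.join(cat_dir, split)
        for defect in sorted(os.listdir(split_dir)):
            for name in sorted(os.listdir(os.path.join(split_dir, defect))):
                mask_path = ""
                if defect != "good":
                    # MVTec AD stores pixel masks as <id>_mask.png
                    mask_path = os.path.join(
                        cat_dir, "ground_truth", defect,
                        name.replace(".png", "_mask.png"))
                samples.append({
                    "img_path": os.path.join(split_dir, defect, name),
                    "mask_path": mask_path,
                    "cls_name": category,
                    "anomaly": int(defect != "good"),
                })
        meta[split][category] = samples

with open(os.path.join(root, "meta.json"), "w") as f:
    json.dump(meta, f, indent=4)
```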
### VisA
- Download and extract [VisA](https://github.com/amazon-science/spot-diff) into `data/visa`.
- Run `python data/visa.py` to obtain `data/visa/meta.json`.
```
data
├── visa
│   ├── meta.json
│   ├── candle
│   │   └── Data
│   │       ├── Images
│   │       │   ├── Anomaly
│   │       │   │   └── 000.JPG
│   │       │   └── Normal
│   │       │       └── 0000.JPG
│   │       └── Masks
│   │           └── Anomaly
│   │               └── 000.png
```
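Once generated, both indexes can be consumed the same way. A small example of reading one back, again assuming the hypothetical `split -> category -> sample-list` schema sketched above:

```python
import json

with open("data/visa/meta.json") as f:
    meta = json.load(f)

# Summarize the test split per category.
for category, samples in meta["test"].items():
    n_anom = sum(s["anomaly"] for s in samples)
    print(f"{category}: {len(samples)} test images, {n_anom} anomalous")
```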
## Test
```shell
sh test_zero_shot.sh
```
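The script's exact arguments are defined in the repo. As background, the sketch below shows the general CLIP-based zero-shot scoring idea that this line of work (e.g., WinCLIP) builds on, using the public `open_clip` API. This is not the AnoVL method itself; the model name, pretrained tag, and prompts are arbitrary illustrative choices.

```python
import torch
from PIL import Image
import open_clip

# Illustrative backbone choice; any open_clip model/weights pair works.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="laion400m_e32")
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

# Paired normal/anomalous text prompts for one object class.
normal_prompts = ["a photo of a flawless bottle"]
anomaly_prompts = ["a photo of a damaged bottle"]
text = tokenizer(normal_prompts + anomaly_prompts)

image = preprocess(
    Image.open("data/mvtec/bottle/test/good/000.png")).unsqueeze(0)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat /= img_feat.norm(dim=-1, keepdim=True)
    txt_feat /= txt_feat.norm(dim=-1, keepdim=True)
    # Softmax over prompts turns cosine similarities into probabilities.
    sims = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

# Anomaly score = probability mass assigned to the anomalous prompt(s).
anomaly_score = sims[0, len(normal_prompts):].sum().item()
print(f"anomaly score: {anomaly_score:.3f}")
```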
## Acknowledgements
We thank the authors of [CLIP](https://github.com/openai/CLIP), [OpenCLIP](https://github.com/mlfoundations/open_clip), [WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation](https://arxiv.org/abs/2303.14814), and [A Zero-/Few-Shot Anomaly Classification and Segmentation Method for CVPR 2023 VAND Workshop Challenge Tracks 1&2: 1st Place on Zero-shot AD and 4th Place on Few-shot AD](https://arxiv.org/abs/2305.17382) for their contributions to our research.
## Citation
```bibtex
@article{anovl,
  title={AnoVL: Adapting Vision-Language Models for Unified Zero-shot Anomaly Localization},
  author={Deng, Hanqiu and Zhang, Zhaoxiang and Bao, Jinan and Li, Xingyu},
  journal={arXiv preprint arXiv:2308.15939},
  year={2023}
}
```