Workshop Link | Challenge Link | Report Paper
Xuhai Chen · Yue Han · Jiangning Zhang
This repository contains the official PyTorch implementation of the Zero-/Few-shot Anomaly Classification and Segmentation method used in the CVPR 2023 VAND Challenge, which can be viewed as an improved version of WinCLIP. We won the Zero-shot Track (1st place) and received an Honorable Mention in the Few-shot Track (4th place).
<img src="illustration/main.png" alt="Model Structure" style="max-width: 50px; height: auto;">Results on the Challenge official test set
<img src="illustration/results.png" alt="Model Structure" style="max-width: 50px; height: auto;">Installation
- Prepare the experimental environment:

```bash
pip install -r requirements.txt
```
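A fresh, isolated environment keeps the pinned requirements from clashing with existing packages. One way to set it up, assuming conda is installed (the environment name and Python version here are arbitrary choices for the sketch):

```bash
# Create and activate an isolated environment, then install the pinned deps.
conda create -n vand-april-gan python=3.8 -y
conda activate vand-april-gan
pip install -r requirements.txt
```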
## Dataset Preparation
### MVTec AD
- Download and extract MVTec AD into `data/mvtec`.
- Run `python data/mvtec.py` to obtain `data/mvtec/meta.json`.
```
data
└── mvtec
    ├── meta.json
    └── bottle
        ├── train
        │   └── good
        │       └── 000.png
        ├── test
        │   ├── good
        │   │   └── 000.png
        │   └── anomaly1
        │       └── 000.png
        └── ground_truth
            └── anomaly1
                └── 000.png
```
### VisA
- Download and extract VisA into `data/visa`.
- Run `python data/visa.py` to obtain `data/visa/meta.json`.
```
data
└── visa
    ├── meta.json
    └── candle
        └── Data
            ├── Images
            │   ├── Anomaly
            │   │   └── 000.JPG
            │   └── Normal
            │       └── 0000.JPG
            └── Masks
                └── Anomaly
                    └── 000.png
```
## Train
Set parameters in `train.sh`:

- `train_data_path`: the path to the training dataset
- `dataset`: name of the training dataset; options: `mvtec`, `visa`
- `model`: the CLIP model
- `pretrained`: the pretrained weights
- `features_list`: features to be mapped into the joint embedding space
- `image_size`: the size of the images fed into the CLIP model
- `aug_rate`: the probability of stitching images (MVTec AD only)
Then run the following command:

```bash
sh train.sh
```
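For reference, a plausible `train.sh` is sketched below. The flag names mirror the parameter list above; the concrete values (the CLIP backbone, feature layers, image size, and paths) are illustrative assumptions, so check the repository's training script for the exact argument spelling.

```bash
#!/bin/bash
# Illustrative sketch only -- flag names follow the parameter list above;
# verify the exact arguments against the repository's training script.
python train.py \
    --train_data_path ./data/mvtec \
    --dataset mvtec \
    --model ViT-L-14-336 \
    --pretrained openai \
    --features_list 6 12 18 24 \
    --image_size 518 \
    --aug_rate 0.2
```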
## Test
### Pretrained Models
We provide our pre-trained models in `exps/pretrained`, where `mvtec_pretrained.pth` is the model trained on the MVTec AD dataset and `visa_pretrained.pth` is the model trained on the VisA dataset.
Set parameters in `test_zero_shot.sh`:

- `data_path`: the path to the test dataset
- `dataset`: name of the test dataset; options: `mvtec`, `visa`
- `checkpoint_path`: the path to the test model
Then, run the following command to test them in the zero-shot setting:

```bash
sh test_zero_shot.sh
```
Set parameters in `test_few_shot.sh`:

- `data_path`: the path to the test dataset
- `dataset`: name of the test dataset; options: `mvtec`, `visa`
- `checkpoint_path`: the path to the test model
- `k_shot`: the number of reference images
Then, run the following command to test them in the few-shot setting:

```bash
sh test_few_shot.sh
```
### Zero-shot Setting
Set parameters in `test_zero_shot.sh`:

- `data_path`: the path to the test dataset
- `dataset`: name of the test dataset; options: `mvtec`, `visa`
- `checkpoint_path`: the path to the test model
- `model`: the CLIP model
- `pretrained`: the pretrained weights
- `features_list`: features to be mapped into the joint embedding space
- `image_size`: the size of the images fed into the CLIP model
- `mode`: zero-shot or few-shot
Then run the following command:

```bash
sh test_zero_shot.sh
```
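Concretely, `test_zero_shot.sh` could wrap a call like the sketch below; the script name `test.py` and all concrete values (backbone, layer indices, image size, the `zero_shot` spelling) are placeholder assumptions, while the parameter names follow the list above.

```bash
#!/bin/bash
# Illustrative sketch -- evaluates a VisA-trained checkpoint on MVTec AD
# in the zero-shot setting; verify names and values against the repository.
python test.py \
    --data_path ./data/mvtec \
    --dataset mvtec \
    --checkpoint_path ./exps/pretrained/visa_pretrained.pth \
    --model ViT-L-14-336 \
    --pretrained openai \
    --features_list 6 12 18 24 \
    --image_size 518 \
    --mode zero_shot
```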
### Few-shot Setting
Set parameters in `test_few_shot.sh`:

- `data_path`: the path to the test dataset
- `dataset`: name of the test dataset; options: `mvtec`, `visa`
- `checkpoint_path`: the path to the test model
- `model`: the CLIP model
- `pretrained`: the pretrained weights
- `features_list`: features to be mapped into the joint embedding space
- `few_shot_features`: features stored in the memory banks
- `image_size`: the size of the images fed into the CLIP model
- `mode`: zero-shot or few-shot
- `k_shot`: the number of reference images
- `seed`: the random seed
Then run the following command:

```bash
sh test_few_shot.sh
```
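For illustration, `test_few_shot.sh` might wrap a call such as the following. As with the zero-shot sketch, the script name `test.py` and every concrete value are assumptions; only the parameter names are taken from the list above.

```bash
#!/bin/bash
# Illustrative sketch -- evaluates a VisA-trained checkpoint on MVTec AD
# with 4 reference images; verify names and values against the repository.
python test.py \
    --data_path ./data/mvtec \
    --dataset mvtec \
    --checkpoint_path ./exps/pretrained/visa_pretrained.pth \
    --model ViT-L-14-336 \
    --pretrained openai \
    --features_list 6 12 18 24 \
    --few_shot_features 6 12 18 24 \
    --image_size 518 \
    --mode few_shot \
    --k_shot 4 \
    --seed 42
```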
## Citation
If our work is helpful for your research, please consider citing:
```
@article{chen2023zero,
  title={A Zero-/Few-Shot Anomaly Classification and Segmentation Method for CVPR 2023 VAND Workshop Challenge Tracks 1\&2: 1st Place on Zero-shot AD and 4th Place on Few-shot AD},
  author={Chen, Xuhai and Han, Yue and Zhang, Jiangning},
  journal={arXiv preprint arXiv:2305.17382},
  year={2023}
}
```
## Acknowledgements
We thank the authors of WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation for their assistance with our research.