VCP-CLIP (Accepted by ECCV 2024)
VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation
(This project is being continuously updated)
Zhen Qu, Xian Tao, Mukesh Prasad, Fei Shen, Zhengtao Zhang, Xinyi Gong, Guiguang Ding
Table of Contents
- Introduction
- Environments
- Data Preparation
- Run Experiments
- Citation
- Acknowledgements
- License
Introduction
This repository contains the source code for VCP-CLIP, implemented in PyTorch.
Recently, large-scale vision-language models such as CLIP have demonstrated immense potential in the zero-shot anomaly segmentation (ZSAS) task, using a single unified model to directly detect anomalies on any unseen product with painstakingly crafted text prompts. However, existing methods often assume that the product category to be inspected is known and therefore rely on product-specific text prompts, which is difficult to achieve in data-privacy scenarios. Moreover, even products of the same type can exhibit significant differences due to specific components and variations in the production process, posing significant challenges to the design of text prompts. To this end, we propose a visual context prompting model (VCP-CLIP) for the ZSAS task based on CLIP. The insight behind VCP-CLIP is to employ visual context prompting to activate CLIP's anomalous semantic perception ability. Specifically, we first design a Pre-VCP module that embeds global visual information into the text prompt, eliminating the need for product-specific prompts. We then propose a novel Post-VCP module that adjusts the text embeddings using the fine-grained features of the images. In extensive experiments on 10 real-world industrial anomaly segmentation datasets, VCP-CLIP achieves state-of-the-art performance in the ZSAS task.
Environments
Create a new conda environment and install required packages.
conda create -n VCP_env python=3.9
conda activate VCP_env
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
Experiments are conducted on an NVIDIA RTX 3090 GPU.
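A quick sanity check (not part of the original setup steps, but harmless) can confirm that the environment is active and the GPU is visible to PyTorch:

conda activate VCP_env
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"   # expect 1.13.1 and True on a CUDA machine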
Data Preparation
MVTec-AD and VisA
1. Download and prepare the original MVTec-AD and VisA datasets to any desired path. The original dataset format is as follows:
path1
└── mvtec
    └── bottle
        ├── train
        │   └── good
        │       └── 000.png
        ├── test
        │   ├── good
        │   │   └── 000.png
        │   ├── anomaly1
        │   │   └── 000.png
        │   └── anomaly2
        │       └── 000.png
        └── ground_truth
            ├── anomaly1
            │   └── 000_mask.png
            └── anomaly2
                └── 000_mask.png

path2
└── visa
    ├── candle
    │   └── Data
    │       ├── Images
    │       │   ├── Anomaly
    │       │   │   └── 000.JPG
    │       │   └── Normal
    │       │       └── 0000.JPG
    │       └── Masks
    │           └── Anomaly
    │               └── 000.png
    └── split_csv
        ├── 1cls.csv
        └── 1cls.xlsx
2. Standardize the MVTec-AD and VisA datasets into the same format and generate the corresponding .json files:
- Run ./dataset/make_dataset_new.py to generate the standardized datasets ./dataset/mvisa/data/visa and ./dataset/mvisa/data/mvtec
- Run ./dataset/make_meta.py to generate ./dataset/mvisa/data/meta_visa.json and ./dataset/mvisa/data/meta_mvtec.json (this step can be skipped, since these files are already provided). A minimal invocation sketch follows this list.
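The sketch below assumes both scripts are run from the repository root and that any raw dataset paths they expect (path1/path2 above) have been set accordingly; adjust to your local setup.

# Sketch: standardize the raw datasets, then (optionally) rebuild the meta files
python ./dataset/make_dataset_new.py   # writes ./dataset/mvisa/data/visa and ./dataset/mvisa/data/mvtec
python ./dataset/make_meta.py          # writes meta_visa.json and meta_mvtec.json (optional; already provided)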
The format of the standardized datasets is as follows:
./dataset/mvisa/data
├── visa
│   └── candle
│       ├── train
│       │   └── good
│       │       └── visa_0000_000502.bmp
│       ├── test
│       │   ├── good
│       │   │   └── visa_0011_000934.bmp
│       │   └── anomaly
│       │       └── visa_000_001000.bmp
│       └── ground_truth
│           └── anomaly
│               └── visa_000_001000.png
├── mvtec
│   └── bottle
│       ├── train
│       │   └── good
│       │       └── mvtec_000000.bmp
│       ├── test
│       │   ├── good
│       │   │   └── mvtec_good_000272.bmp
│       │   └── anomaly
│       │       └── mvtec_broken_large_000209.bmp
│       └── ground_truth
│           └── anomaly
│               └── mvtec_broken_large_000209.png
├── meta_mvtec.json
└── meta_visa.json
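A quick, optional check (not from the original instructions) that the standardized data and meta files are in place; it only assumes the meta files are valid JSON and prints their top-level keys:

ls ./dataset/mvisa/data    # expect: mvtec  visa  meta_mvtec.json  meta_visa.json
python -c "import json; d = json.load(open('./dataset/mvisa/data/meta_mvtec.json')); print(type(d), list(d)[:5])"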
Run Experiments
Prepare the pre-trained weights
1. Download the CLIP weights pre-trained by OpenAI [ViT-L-14-336 (default), ViT-B-16-224, ViT-L-14-224] to ./pretrained_weight/
2. Optionally, download one of our pre-trained VCP-CLIP weights to ./vcp_weight/. "train_visa.pth" means the auxiliary training dataset was VisA, so it can be used to test any product outside the VisA dataset, and vice versa for "train_mvtec.pth": [train_visa.pth], [train_mvtec.pth]. Note that our pre-trained weights require [ViT-L-14-336] as the backbone.
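As a rough sketch of the expected weight layout (the file names below are illustrative assumptions; the actual names depend on the download source):

# Illustrative only: verify the weights landed in the expected folders
ls ./pretrained_weight/    # e.g. ViT-L-14-336px.pt (the default ViT-L-14-336 backbone)
ls ./vcp_weight/           # e.g. train_visa.pth or train_mvtec.pth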
Training on the seen products of auxiliary datasets
bash train.sh
Testing and visualizing on the unseen products
bash test.sh
Visualization results
Citation
Please cite the following paper if this code helps your project:
@article{qu2024vcpclipvisualcontextprompting,
  title={VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation},
  author={Zhen Qu and Xian Tao and Mukesh Prasad and Fei Shen and Zhengtao Zhang and Xinyi Gong and Guiguang Ding},
  year={2024},
  eprint={2407.12276},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.12276}
}
Acknowledgements
We thank the authors of WinCLIP (zqhang), WinCLIP (caoyunkang), CoCoOp, AnVoL, APRIL-GAN, AnomalyGPT, and AnomalyCLIP; their great works assisted ours.
License
The code and dataset in this repository are licensed under the MIT license.