SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation

Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, Onkar Dabeer.

Table of Contents

- Introduction
- Data description
- Data download
- Data preparation
- Metrics computation
- Citation
- License

Introduction

This repository contains the resources for our ECCV 2022 paper "SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation". Currently, we release the Visual Anomaly (VisA) dataset.

Data description

The VisA dataset contains 12 subsets corresponding to 12 different objects. There are 10,821 images in total, with 9,621 normal and 1,200 anomalous samples. Four subsets are different types of printed circuit boards (PCB) with relatively complex structures containing transistors, capacitors, chips, etc. For objects with multiple instances in a view, we collect four subsets: Capsules, Candles, Macaroni1 and Macaroni2. Instances in Capsules and Macaroni2 differ largely in location and pose. The remaining four subsets, Cashew, Chewing gum, Fryum and Pipe fryum, contain single objects that are roughly aligned. The anomalous images contain various flaws, including surface defects such as scratches, dents, color spots or cracks, and structural defects such as misplaced or missing parts.

| Object      | # normal samples | # anomaly samples | # anomaly classes | Object type        |
|-------------|------------------|-------------------|-------------------|--------------------|
| PCB1        | 1,004            | 100               | 4                 | Complex structure  |
| PCB2        | 1,001            | 100               | 4                 | Complex structure  |
| PCB3        | 1,006            | 100               | 4                 | Complex structure  |
| PCB4        | 1,005            | 100               | 7                 | Complex structure  |
| Capsules    | 602              | 100               | 5                 | Multiple instances |
| Candles     | 1,000            | 100               | 8                 | Multiple instances |
| Macaroni1   | 1,000            | 100               | 7                 | Multiple instances |
| Macaroni2   | 1,000            | 100               | 7                 | Multiple instances |
| Cashew      | 500              | 100               | 9                 | Single instance    |
| Chewing gum | 503              | 100               | 6                 | Single instance    |
| Fryum       | 500              | 100               | 8                 | Single instance    |
| Pipe fryum  | 500              | 100               | 9                 | Single instance    |

Data download

We host the VisA dataset on AWS S3; it can be downloaded via this URL.

The data tree of the downloaded data is as follows.

VisA
├── candle
│   ├── Data
│   │   ├── Images
│   │   │   ├── Anomaly
│   │   │   └── Normal
│   │   └── Masks
│   │       └── Anomaly
│   └── image_anno.csv
├── capsules
│   └── ...

image_anno.csv gives the image-level label and the pixel-level annotation mask for each image. The id2class map functions for the multi-class masks can be found in ./utils/id2class.py. Masks for normal images are not stored, to save space.
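
As a minimal sketch of how the annotations might be consumed (the column names image, label and mask below are assumptions, not guaranteed by the file; check image_anno.csv for the actual schema):

import pandas as pd
import numpy as np
from PIL import Image

# Assumed columns: "image", "label", "mask", with paths relative to the dataset root.
anno = pd.read_csv("./VisA/candle/image_anno.csv")
row = anno.iloc[0]
image = np.array(Image.open("./VisA/" + row["image"]))
# Normal images have no stored mask, so the mask field may be empty.
if isinstance(row.get("mask"), str):
    mask = np.array(Image.open("./VisA/" + row["mask"]))  # multi-class ids, see id2class.py
print(row["label"], image.shape)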

Data preparation

To prepare the 1-class, 2-class-highshot and 2-class-fewshot setups described in the original paper, we use ./utils/prepare_data.py to reorganize the data following the data splitting files in ./split_csv/. A sample command for the 1-class setup is given below.

python ./utils/prepare_data.py --split-type 1cls --data-folder ./VisA --save-folder ./VisA_pytorch --split-file ./split_csv/1cls.csv

The data tree of the reorganized 1-class setup is as follows.

VisA_pytorch
└── 1cls
    ├── candle
    │   ├── ground_truth
    │   ├── test
    │   │   ├── good
    │   │   └── bad
    │   └── train
    │       └── good
    ├── capsules
    └── ...

Specifically, the reorganized 1-class setup follows the data tree of MVTec-AD. For each object, the data has three folders:

- train: the normal training images, under good;
- test: the test images, with normal images under good and anomalous images under bad;
- ground_truth: the segmentation masks for the anomalous test images.

Note that the multi-class ground-truth segmentation masks of the original dataset are reindexed to binary masks, where 0 indicates a normal pixel and 255 an anomalous pixel.
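
A minimal sketch of reading a test image and its mask from this layout and checking the binary convention (the ground_truth subfolder layout and the mask file naming are assumptions; the mask may use a different file extension than the image):

import os
import numpy as np
from PIL import Image

root = "./VisA_pytorch/1cls/candle"
img_name = sorted(os.listdir(os.path.join(root, "test", "bad")))[0]
image = np.array(Image.open(os.path.join(root, "test", "bad", img_name)))

# Assumed MVTec-style mask location; adjust the path/extension if it differs.
mask_path = os.path.join(root, "ground_truth", "bad", img_name)
mask = np.array(Image.open(mask_path).convert("L"))
assert set(np.unique(mask)) <= {0, 255}  # 0 = normal pixel, 255 = anomalous pixel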

In addition, the 2-class setups can be prepared in a similar way by changing the arguments of prepare_data.py.
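
For example, the 2-class-highshot setup might be prepared as follows, assuming the split-type values mirror the names of the split files in ./split_csv/ (an assumption; check prepare_data.py for the exact accepted values):

python ./utils/prepare_data.py --split-type 2cls_highshot --data-folder ./VisA --save-folder ./VisA_pytorch --split-file ./split_csv/2cls_highshot.csv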

Metrics computation

To compute the classification and segmentation metrics, please refer to ./utils/metrics.py. Note that we take normal samples into account when computing the localization metrics; this differs from some other works, which disregard normal samples in localization.
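
As a minimal sketch of the idea (illustrative only, not the repository's metrics.py), the pixel-level AUROC can be computed over all test pixels, including those from normal images whose masks are all zeros:

import numpy as np
from sklearn.metrics import roc_auc_score

def pixel_auroc(masks, score_maps):
    # masks: list of HxW arrays with values in {0, 255}; all zeros for normal images
    # score_maps: list of HxW per-pixel anomaly scores predicted by a model
    y_true = np.concatenate([(m > 0).ravel() for m in masks]).astype(int)
    y_score = np.concatenate([s.ravel() for s in score_maps])
    return roc_auc_score(y_true, y_score)

def image_auroc(labels, scores):
    # labels: 1 for anomalous images, 0 for normal; scores: image-level anomaly scores
    return roc_auc_score(labels, scores)

Dropping the normal images from masks and score_maps would generally change the pixel-level score, which is why this convention matters when comparing numbers across papers.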

Citation

Please cite the following paper if this dataset helps your project:

@article{zou2022spot,
  title={SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation},
  author={Zou, Yang and Jeong, Jongheon and Pemula, Latha and Zhang, Dongqing and Dabeer, Onkar},
  journal={arXiv preprint arXiv:2207.14315},
  year={2022}
}

License

The data is released under the CC BY 4.0 license.