<div align="center">

# [ECCV 2024] An Incremental Unified Framework for Small Defect Inspection

This is the official repository for IUF (ECCV 2024).

Jiaqi Tang, Hao Lu, Xiaogang Xu, Ruizheng Wu, Sixing Hu,
Tong Zhang, Tsz Wa Cheng, Ming Ge, Ying-Cong Chen* and Fugee Tsung.

*: Corresponding Author

Here is our Project Page with Video!

</div>

## 🎯 Our Setting: Incremental Unified Framework (IUF)
- 🚩 The first framework to integrate incremental learning into unified reconstruction-based detection.
- 🚩 Overcomes the capacity limitations of memory-bank approaches.
- 🚩 Delivers not only image-level performance but also pixel-level localization.
## 📢 News and Updates

- ✅ Sep 24, 2024. We released the code and dataset of IUF. See the Google Drive link below for downloading the dataset.
## ▶️ Getting Started

<!-- 1. [Installation](#installation) 2. [Dataset](#dataset) 3. [Configuration](#configuration) 4. [Training](#Training) 5. [Testing](#Testing) -->

## 🔧 Installation
- PyTorch >= 1.11.0
- Install the remaining dependencies with:

```shell
pip install -r requirements.txt
```
## 💾 Dataset Preparation
- Google Drive Link for DOWNLOADING the dataset.

- The dataset is organized as follows:

```
├── VisA
│   ├── Data
│   │   ├── candle
│   │   │   ├── ground_truth    # (bad)
│   │   │   ├── test            # (bad) (good)
│   │   │   └── train           # (good)
│   │   ├── capsules
│   │   │   └── ...
│   │   └── ...
│   ├── orisplit
│   │   ├── candle_test.json    # for testing on candle
│   │   ├── candle_train.json   # for training on candle
│   │   └── ...
│   └── split
│       ├── 8_12_train.json     # for training on classes 8-12
│       ├── 8_test.json         # for testing on classes 1-8
│       └── 11_test.json        # for testing on classes 1-11
└── MvTec AD
    ├── mvtec_anomaly_detection   # data
    │   └── 33333                 # for 33333 class
    ├── json_test                 # per-class test splits
    │   ├── test_X.json           # test split for class X
    │   └── ...
    └── json_train                # per-class train splits
        ├── train_X.json          # train split for class X
        └── ...
```
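As a quick sanity check before training, the top-level VisA folders from the tree above can be verified with a small script (a hypothetical helper, not part of the released code; folder names assumed to match the layout shown):

```python
from pathlib import Path

# Expected sub-directories under the VisA root, per the layout above
EXPECTED_SUBDIRS = ["Data", "orisplit", "split"]

def missing_dirs(root):
    """Return which expected VisA sub-directories are absent under `root`."""
    root = Path(root)
    return [name for name in EXPECTED_SUBDIRS if not (root / name).is_dir()]

# Example (dataset path elided, as in the configs below):
# print(missing_dirs("/dataset/.../VisA"))
```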
- Task Protocols: Based on the practical requirements of industrial defect inspection, we set up our experiments in both single-step and multi-step settings. We denote a task stream as $\mathbf{X - Y \ with \ N \ Step(s)}$, where $\mathbf{X}$ is the number of base objects before incremental learning starts, $\mathbf{Y}$ is the number of new objects added at each step, and $\mathbf{N}$ is the number of tasks during incremental learning. When training on the base objects, $\mathbf{N} = 0$; after each step, $\mathbf{N}$ increases by 1. Our task streams are as follows:
- MVTec-AD: $\mathbf{14-1\ with \ 1\ Step}$, $\mathbf{10-5\ with \ 1\ Step}$, $\mathbf{3 - 3\ with \ 4\ Steps}$ and $\mathbf{10-1\ with \ 5\ Steps}$.
  - VisA: $\mathbf{11-1\ with \ 1\ Step}$, $\mathbf{8-4\ with \ 1\ Step}$ and $\mathbf{8-1\ with \ 4\ Steps}$.
- Usage: follow the above protocols when running incremental learning.
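To make the notation concrete, the class range trained at each step of an $\mathbf{X - Y \ with \ N \ Step(s)}$ protocol can be enumerated as follows (an illustrative sketch, not part of the released code):

```python
def protocol_steps(x, y, n):
    """Enumerate (step, (first_class, last_class)) pairs for an
    "X-Y with N Step(s)" protocol, using 1-indexed inclusive class ranges.

    Step 0 trains the X base objects; each subsequent step adds Y new objects.
    """
    steps = [(0, (1, x))]  # base training covers classes 1..X
    for step in range(1, n + 1):
        first = x + (step - 1) * y + 1
        steps.append((step, (first, first + y - 1)))
    return steps

# VisA, 8-4 with 1 Step: 8 base objects, then 4 new objects in one step
print(protocol_steps(8, 4, 1))  # [(0, (1, 8)), (1, (9, 12))]
```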
## 🎨 Configuration
- The configuration files for training are in `experiments/`.

- Dataset setting:

```yaml
dataset:
  type: custom
  image_reader:
    type: opencv
    kwargs:
      image_dir: /dataset/.../VisA  # data path
      color_mode: RGB
  train:
    meta_file: /dataset/.../A_Data/orisplit/XX.json  # training json path
    rebalance: False
    hflip: False
    vflip: False
    rotate: False
  val:
    meta_file: /dataset/.../.../VisA/split/XX.json  # for saving previous weight
  test:
    meta_file: /dataset/.../A_Data/orisplit/XX.json  # testing json path
```
- Saving setting:

```yaml
saver:
  auto_resume: True
  always_save: True
  load_path: checkpoints/ckpt.pth.tar
  save_dir: checkpoints/
  log_dir: log/
```
- For testing, uncomment this part of the config:

```yaml
vis_compound:
  save_dir: vis_compound
  max_score: null
  min_score: null
vis_single:
  save_dir: ./vis_single
  max_score: null
  min_score: null
```
## 🖥️ Training and Testing
- Modify the Dataset Setting in the training configuration, then run `sh run.sh`.

- `run.sh` includes two stages:

```shell
cd /dataset/.../SmallDefect_Vis/IUF

# Stage 1: Training base objects
CUDA_VISIBLE_DEVICES=0,1,2,3 python ./tools/train_val.py --config /dataset/.../SmallDefect_Vis/IUF/experiments/VisA/8_1_1_1_1/config_c1.yaml

# Stage 2: Training incremental objects
CUDA_VISIBLE_DEVICES=0,1,2,3 python ./tools/train_val.py --config /dataset/.../SmallDefect_Vis/IUF/experiments/VisA/8_1_1_1_1/config_c9.yaml
CUDA_VISIBLE_DEVICES=0,1,2,3 python ./tools/train_val.py --config /dataset/.../SmallDefect_Vis/IUF/experiments/VisA/8_1_1_1_1/config_c10.yaml
CUDA_VISIBLE_DEVICES=0,1,2,3 python ./tools/train_val.py --config /dataset/.../SmallDefect_Vis/IUF/experiments/VisA/8_1_1_1_1/config_c11.yaml
CUDA_VISIBLE_DEVICES=0,1,2,3 python ./tools/train_val.py --config /dataset/.../SmallDefect_Vis/IUF/experiments/VisA/8_1_1_1_1/config_c12.yaml
```

You can edit this script to support different task protocols.
- The logs, models and training states are saved to `./experiments/checkpoints/...` and `./experiments/logs/...`. You can also monitor training with `tensorboard --logdir ./events_dec/...`.
## ⚡ Performance
Compared with other baselines, our model achieves state-of-the-art performance:
- ✅ [Figure 1] Quantitative evaluation on MVTec-AD.
- ✅ [Figure 2] Quantitative evaluation on VisA.
- ✅ [Figure 3] Qualitative evaluation.
## 📝 Citations

The following is a BibTeX reference:

```bibtex
@inproceedings{tang2024incremental,
  title     = {An Incremental Unified Framework for Small Defect Inspection},
  author    = {Tang, Jiaqi and Lu, Hao and Xu, Xiaogang and Wu, Ruizheng and Hu, Sixing and Zhang, Tong and Cheng, Tsz Wa and Ge, Ming and Chen, Ying-Cong and Tsung, Fugee},
  booktitle = {18th European Conference on Computer Vision (ECCV)},
  year      = {2024}
}
```
## 📧 Connecting with Us

If you have any questions, please feel free to email jtang092@connect.hkust-gz.edu.cn.
## 🙏 Acknowledgment

This research was sponsored by AIR@InnoHK. The code is inspired by UniAD.