# Defect Spectrum: A Granular Look of Large-Scale Defect Datasets with Rich Semantics [ECCV 2024]
Shuai Yang$^{*}$, Zhifei Chen$^{*}$, Pengguang Chen, Xi Fang, Yixun Liang, Shu Liu, Yingcong Chen$^{**}$
HKUST(GZ), HKUST, SmartMore Corp.
$^{*}$: Equal contribution. $^{**}$: Corresponding author.
<a href="https://arxiv.org/abs/2310.17316"><img src="https://img.shields.io/badge/arXiv-2310.17316-b31b1b.svg" height=22.5></a> <a href="https://envision-research.github.io/Defect_Spectrum/"><img src="https://img.shields.io/static/v1?label=Project&message=Website&color=red" height=20.5></a> <a href="https://huggingface.co/datasets/DefectSpectrum/Defect_Spectrum"><img src="https://img.shields.io/badge/Dataset-Huggingface-blue" height=20.5></a>
## 🎏 Introduction
- We introduce Defect Spectrum, a comprehensive benchmark that offers precise, semantically rich, and large-scale annotations for a wide range of industrial defects.
- We introduce Defect-Gen, a generator designed to create high-quality and diverse defective images, even when only limited defective data is available.
## 💡 Dataset Reannotation
Industrial datasets often lack detailed defect annotations, providing only binary masks or even misclassifying defects. Defect Spectrum addresses this with refined, large-scale annotations for a wide range of industrial defects. Built on four key industrial benchmarks, it improves annotation precision and captures subtle defects that earlier annotations missed. Each sample carries rich semantic annotations that distinguish multiple defect types within a single image, together with a descriptive caption, facilitating future Vision Language Model research.
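The dataset is hosted on Hugging Face (badge above). A minimal download sketch, assuming a recent `huggingface_hub` is installed; the target directory is just an example:

```bash
# Download the Defect Spectrum dataset from Hugging Face.
# Assumes `pip install -U huggingface_hub`; ./Defect_Spectrum is an arbitrary local directory.
huggingface-cli download DefectSpectrum/Defect_Spectrum --repo-type dataset --local-dir ./Defect_Spectrum
```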
<div align="center"> <img src="docs/sup_anno_1.png" width="80%">Example annotation from the MVTec AD dataset</div>

## 💡 Defect-Gen
Furthermore, we introduce Defect-Gen, a two-stage diffusion-based generator designed to create high-quality and diverse defective images, even when working with limited defective data. The synthetic images generated by Defect-Gen significantly improve the performance of defect segmentation models, raising mIoU by up to 9.85 points on Defect Spectrum subsets.
<div align="center"> <img src="docs/pipeline.png" width="80%">Two-staged defect generation pipeline
</div>This generative model excels in producing diverse and high-quality images, even when trained on limited data.
<div align="center"> <img src="docs/qualitative.png" width="80%">Qualitative Comparison between Defect-Gen and other generation methods
</div>🛠️ Installation
- Create an environment with python==3.8.0: `conda create -n diff python==3.8.0`.
- Activate it: `conda activate diff`.
- Install the basic requirements: `pip install -r requirements.txt`.
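A quick sanity check after installation; this is a sketch that assumes PyTorch is among the pinned requirements (adjust to whatever `requirements.txt` actually installs):

```bash
# Verify the environment resolved; PyTorch being present is an assumption based on the
# diffusion-based codebase, not something documented in this README.
conda activate diff
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```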
## 🚀 Getting Started
### Train your own Defect-Gens
- Specify the number of defect types in `train_[large/small].sh` according to your needs. For example, if the "Capsule" object has 7 defective classes, set `--num_defect` to 7.
- Prepare a config yaml file for both the large and the small model. The input and output channel counts should be the sum of the number of defect types, the RGB channels, and the background channel (if needed). For example, an object with 7 defective classes needs 7 + 3 = 10 input/output channels, excluding the background channel (see the sketch after this list).
- Run training with `sh train_[large/small].sh`.
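A worked sketch of the channel arithmetic above, using the "Capsule" example. Only `--num_defect` is a documented option; how the scripts and yaml files name the channel settings is an assumption, so check `train_large.sh`, `train_small.sh`, and the provided configs for the actual option names:

```bash
# "Capsule" example: 7 defect classes.
#   input/output channels = 7 (defect classes) + 3 (RGB)     = 10   # without a background channel
#   input/output channels = 7 (defect classes) + 3 (RGB) + 1 = 11   # with a background channel
# Set --num_defect 7 in both scripts, keep the yaml channel counts consistent, then:
sh train_large.sh
sh train_small.sh
```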
## 🚀 Inference

- All checkpoints will be saved to `/[working_dir]/checkpoint`.
- Update the checkpoint paths in `inference.sh`.
- Specify the switching point between the large and the small model with `--step_inference`.
- Specify the number of defective types with `--num_defect`.
- Run inference with `sh inference.sh` (see the sketch below).
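A minimal sketch of an inference run. The flag values and the checkpoint handling are placeholders; only `--step_inference` and `--num_defect` are documented above, so check `inference.sh` for the remaining options.

```bash
# Hypothetical "Capsule" run with 7 defect classes.
# Checkpoints are written to /[working_dir]/checkpoint; point inference.sh at the ones you want.
# Inside inference.sh, the documented flags would look like:
#   --num_defect 7          # number of defective classes
#   --step_inference 200    # placeholder switching step between the large and small model
sh inference.sh
```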
## 📍 Citation
If you find this project useful in your research, please consider citing:
@misc{yang2023defect,
title={Defect Spectrum: A Granular Look of Large-Scale Defect Datasets with Rich Semantics},
author={Shuai Yang and Zhifei Chen and Pengguang Chen and Xi Fang and Shu Liu and Yingcong Chen},
year={2023},
eprint={2310.17316},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Acknowledgement
- This work is built upon Guided-Diffusion and SinDiffusion.
- The datasets we adopted come from Apple-Vision, MVTec-AD, DAGM-2007 and Cotton.
- We would like to extend our greatest thanks to everyone at SmartMore Corp. who helped, whether their contributions are publicly recognized or remain behind the scenes.