# Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection
This repo is the official PyTorch implementation of the paper [Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection](https://arxiv.org/abs/2211.11255)
by Luping Liu, Yi Ren, Xize Cheng, and Zhou Zhao (Zhejiang University).
## What does this work do?
In this work, we propose a new perceptron bias assumption, namely that discriminator models are more sensitive to certain subareas of the input space, to explain the overconfidence problem in out-of-distribution (OOD) detection. Our detection method combines discriminator and generation models: a ResNet extracts features, while the diffusion denoising process of a diffusion model (with classifier-free guidance) reduces the overconfident subareas. Our method achieves OOD detection results competitive with state-of-the-art methods.
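As a rough illustration of that pipeline, here is a minimal sketch of one plausible scoring rule, not the paper's exact one; `resnet`, `diffusion`, `q_sample`, and `ddim_denoise` are hypothetical placeholders, not this repo's actual API:

```python
import torch

@torch.no_grad()
def ood_score(x, resnet, diffusion, t, guidance_w=2.0):
    """Sketch of a diffusion-denoising OOD score; higher = more likely OOD."""
    # Perturb the input part-way into the forward (noising) process.
    x_t = diffusion.q_sample(x, t)
    # Denoise back with classifier-free guidance, pulling overconfident
    # subareas toward the learned in-distribution manifold.
    x_hat = diffusion.ddim_denoise(x_t, t, guidance_w)
    # Score by how far the discriminator's features move under denoising:
    # in-distribution inputs should change little, OOD inputs a lot.
    return (resnet(x) - resnet(x_hat)).flatten(1).norm(dim=1)
```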
## Code structure
- Model: DDIM, iDDPM
- Dataset: CIFAR-10, CIFAR-100
- Runner:
  - `runner`: basic training and sampling
  - `ood_detection`: diffusion-based OOD detection
## How to run the code
### Dependencies
Run the following to install the necessary Python packages:

```bash
pip install -r requirements.txt
```
### Usage
Train the diffusion models through `main.py`, either on a single GPU or on multiple GPUs via `torchrun` (a sketch of the distributed setup it expects follows the flag list):

```bash
# single GPU
python main.py --runner training --config config/ddim_cifar10_cond.yml --train_path temp/train/base_multi

# multiple GPUs (here: 2 processes, one per GPU)
torchrun --nproc_per_node 2 main.py --runner training --config config/ddim_cifar10_cond.yml --train_path temp/train/base_multi
```
- `runner`: choose the runner mode
- `device`: choose the device to use
- `config`: choose the config file
- `train_path`: choose the path to save the training status
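For reference, `torchrun` launches one process per GPU and expects a standard `torch.distributed` setup inside the entry point. A minimal, repo-agnostic sketch:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_distributed(model):
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment,
    # so the default "env://" rendezvous just works.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    # Gradients are synchronized across the --nproc_per_node processes.
    return DDP(model.cuda(local_rank), device_ids=[local_rank])
```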
Evaluate the FID of the diffusion models through `main.py` (the metric itself is sketched after the flag list):

```bash
python main.py --runner fid --method PNDM4 --sample_step 50 --device cuda --config config/ddim_cifar10_cond.yml \
    --image_path temp/sample --model_path temp/models/ddim/ema_cifar10.ckpt
```
- `method`: choose the numerical method for sampling
- `sample_step`: control the total number of generation steps
- `image_path`: choose the path to save the generated images
- `model_path`: choose the path of the diffusion model checkpoint
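FID compares Inception-feature statistics (mean and covariance) of generated images against precalculated reference statistics (provided in the checkpoints section below). Given those, the metric reduces to the standard Fréchet distance between two Gaussians; a minimal sketch:

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    # Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    # ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerics
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```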
Generate samples for OOD detection through `main.py` (one guided DDIM update is sketched after the flag list):

```bash
python main.py --runner detection --config config/32_cifar10_cond.yml --method DDIM --sample_step 50 \
    --model_path temp/model/ddim_cifar10_cond.ckpt --disc_path temp/model/res18_cifar10_disc.ckpt --repeat_size 4
```
- `disc_path`: choose the path of the discriminator model checkpoint
- `repeat_size`: choose the number of repeated samples
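For intuition, a single deterministic DDIM update with classifier-free guidance looks roughly like this; these are the standard formulas from the DDIM and classifier-free guidance papers, and `eps_model`, `alpha_bar`, and the guidance weight `w` are illustrative names, not this repo's API:

```python
import torch

@torch.no_grad()
def ddim_step(eps_model, x_t, t, t_prev, y, alpha_bar, w=2.0):
    # Classifier-free guidance: mix conditional and unconditional predictions.
    eps = (1 + w) * eps_model(x_t, t, y) - w * eps_model(x_t, t, None)
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    # Predict x_0, then step to t_prev deterministically (DDIM with eta = 0).
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps
```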
Compute OOD detection results through `detect/ood_detect.py` (the AUROC computation is sketched after the flag list):

```bash
python detect/ood_detect.py --id_name cifar10 --space logit --repeat_size 4
```
- `id_name`: choose the name of the in-distribution dataset
- `space`: choose the detection space generated by the discriminator model (e.g., `logit`)
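Detection quality is typically summarized with AUROC over the ID/OOD score distributions. A minimal sketch using scikit-learn (not the repo's exact script):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc(id_scores, ood_scores):
    """id_scores, ood_scores: numpy arrays; assumes higher score = more OOD."""
    # Treat OOD as the positive class.
    labels = np.concatenate([np.zeros_like(id_scores), np.ones_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)
```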
## Datasets & checkpoints
All datasets, precalculated statistics for FID, and model checkpoints are provided in this OneDrive.
## References
If you find the code useful for your research, please consider citing:
```bibtex
@misc{liu2022diffusion,
    title={Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection},
    author={Luping Liu and Yi Ren and Xize Cheng and Zhou Zhao},
    year={2022},
    eprint={2211.11255},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
This work builds upon previous papers that might also interest you:
- Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising Diffusion Implicit Models." International Conference on Learning Representations, 2021.
- Liu, Luping, et al. "Pseudo Numerical Methods for Diffusion Models on Manifolds." International Conference on Learning Representations, 2022.
- Yang, Jingkang, et al. "OpenOOD: Benchmarking Generalized Out-of-Distribution Detection." Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.